Why do addons randomly attach to various Scrapebox Instances?
Edit: This is an old entry. I'm leaving it here for reference's sake, but this behavior was changed prior to V 1.16.0 coming out. Addons now attach to the instances they are launched from.
Scrapebox addons are stand-alone applications. They use Inter Process Communication (IPC) to communicate with the scrapebox.exe process. IPC is not multi-process aware, so when you launch an addon, Windows scans the process list and attaches the addon to the first Scrapebox instance it finds.
This is why, if you have multiple instances of Scrapebox running and you launch an addon, it won't always import URLs or proxies from the instance you launched it from. This is a limitation of IPC.
What format do indexer services for the Rapid Indexer need to be in?
The Rapid Indexer list that is loaded into the "Load Indexer Services" option should be formatted like this:
http://www.whoisya.com/{website}
http://www.domain.com/{website}
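In other words, each line is a service URL with a {website} placeholder that gets swapped for the URL you want indexed. A minimal sketch of that substitution (the service URLs are the examples from above; the expand function name is my own, not part of Scrapebox):

```python
# Sketch of how a Rapid Indexer list is expanded: the {website} token in
# each service URL is replaced with the URL you want indexed.
indexer_services = [
    "http://www.whoisya.com/{website}",
    "http://www.domain.com/{website}",  # placeholder domain from the example above
]

def expand(services, target_url):
    """Return one full request URL per indexer service."""
    return [s.replace("{website}", target_url) for s in services]

print(expand(indexer_services, "example.com"))
# → ['http://www.whoisya.com/example.com', 'http://www.domain.com/example.com']
```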
If you need a list to get you started, go to the Addons menu at the top of Scrapebox, then click "Show available addons". Then click on the Rapid Indexer addon.
Then in the description below you will see a spot that says:
Click HERE to download a small list of indexing sources to get you started.
You can download that list, and you can also try searching Google:
https://www.google.com/#sclient=psy-ab&hl=en&safe=off&source=hp&q=free+scrapebox+rapid+indexer+list
What are the yellow entries in the malware checker addon?
The yellow entries in the Scrapebox malware checker addon are entries that were previously listed as having malware but are now clean. So they are clean and good to go, and they do not need to be removed from the list.
What are the files that the Scrapebox Crash Dump Logger Creates?
The crash dump logger logs your posting progress in real time, so in the event of a crash, you can pick up where you left off instead of starting over.
I have a video that covers the crash dump logger here:
The logger creates the files in your Scrapebox folder, and this is what the files contain:
crashdump_poster_submitted.txt - all submitted or successful urls.
crashdump_master.txt - all urls in your blogs list, meaning your entire list you loaded to post to.
crashdump_poster.txt - all urls that have been processed or completed. This includes urls that are marked as successful and failed.
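Since the master file is your full list and the poster file is everything already processed, the list to resume with after a crash is just the difference of the two. A minimal sketch under that assumption (the file names are from the list above; the function is mine, not something Scrapebox ships):

```python
# Hypothetical sketch: rebuild a "remaining to post" list after a crash by
# subtracting processed URLs (crashdump_poster.txt) from the full loaded
# list (crashdump_master.txt).
def remaining_urls(master_lines, processed_lines):
    """Return master URLs not yet processed, preserving original order."""
    processed = {u.strip() for u in processed_lines}
    return [u.strip() for u in master_lines
            if u.strip() and u.strip() not in processed]

# Usage with the real files would be e.g.:
#   master = open("crashdump_master.txt").read().splitlines()
#   done   = open("crashdump_poster.txt").read().splitlines()
#   resume = remaining_urls(master, done)
```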
What operators can be used with the Page Scanner Addon?
The Operators that can be used with the Page Scanner addon are:
~~~~~~~~~~~~~~~~~~~~~~~~~
%and% - using this requires all the words to be present. For example:
Scrapebox %and% whitehat
Would check whether both the words scrapebox and whitehat were on the page.
~~~~~~~~~~~~~~~~~~~~~~~~~
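The %and% behavior described above can be sketched like this: split the footprint on the operator and require every term to appear in the page text. This is my own illustration of the logic, not Scrapebox's actual implementation:

```python
# Sketch of how an %and% footprint could be evaluated against page text:
# every term separated by %and% must appear (case-insensitively) on the page.
def matches_and(footprint, page_text):
    terms = [t.strip().lower() for t in footprint.split("%and%")]
    text = page_text.lower()
    return all(t in text for t in terms)

matches_and("Scrapebox %and% whitehat", "I use Scrapebox for whitehat SEO")  # → True
matches_and("Scrapebox %and% whitehat", "just a Scrapebox page")             # → False
```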
Does the Scrapebox Link Checker Addon follow redirects when checking for links?
The Scrapebox link checker addon does follow redirects when checking for links.
Why does the Vanity Checker give false positives for example with Tumblr?
You have to bear in mind that what the vanity checker does is check for markers in the page that show the page is not live. As a general rule with such properties, if a page isn't live it can be registered, and that's the assumption the vanity checker makes.
However some properties, Tumblr for example, will ban a sub-domain/username, and when they do that it can't ever be re-registered. So it will show up just like one that is available, and you won't know it's not available until you try to register it. Since the vanity checker is built for speed and doesn't support authentication etc., it does not attempt to register the properties; it only checks to see if they match a given set of markers. So the vanity checker should be considered more of a "highly accurate assumption based on probable data" than a "this is 100% available to register".
I hear people say this about Tumblr about every week, because it's such a popular target. It has had so many sub-domains registered and then banned, and so many people are trying to register subdomains that already have links, that the vast majority of anything meeting common criteria is already taken or banned. It's kind of like trying to find a Google-passed proxy, but worse: everyone wants them, so they are few and far between. The same goes for Tumblr; so many get banned and so many people are looking for ones to register that what is left is mostly the ones that are permanently banned. So if it seems like you're getting a lot of false positives, that's not a fault of the vanity checker; it's doing its job correctly in that case. It's a fault of high demand, low supply and the nature of the game.
So if the vanity checker says a property is available, and you load the page and someone already has it registered, then the vanity checker needs to be "fixed". But if it says it's available, and you load the page in a browser and it shows a 404 or not found/doesn't exist, then the vanity checker is working as it should, making the assumption that since it doesn't exist it should be available. If you then go to register it and it can't be registered, it's just been permanently banned. In this case there is nothing wrong with the vanity checker, and for that matter nothing that could even be fixed; it's just the nature of low supply, high demand and the way the game is played.
So then you have 3 choices: churn thru massive amounts of them to try and find a few you can register, pick a different platform that's not so heavily used, or build in your own platform. (I have a video on this - https://www.youtube.com/watch?v=-F2nr_ltCRo )
What Do The Various Headers In The Malware And Phishing Filter Addon Mean?
Malware List - is just what it sounds like: the site hosted malware.
Software List - attackers on this site might try to trick you into installing programs that harm your browsing experience (for example, by changing your homepage or showing extra ads on sites you visit).
Social List - social engineering; attackers on this site might try to trick you into downloading software or revealing your information (for example passwords, messages, or credit card information).
AS Host - the subnet.
Times Listed - self-explanatory.
Redirect Sites - a list of domains the site sends visitors to.
Intermediary Sites - dangerous sites sending visitors to the checked site (I think).
Traffic Sources - where the site is getting traffic from (I think).
Retrieved On - self-explanatory.
Malicious Date - when the malware was found.
The 2 marked with (I think) are just what I've guessed from checking domains.
What tokens can be used in the Learning Mode Addon Poster?
The Learning Mode Poster was deprecated and replaced with the now more robust Fast Poster, which can be trained via ini files. More info on that here:
http://scrapeboxfaq.com/when-training-the-scrapebox-learning-mode-poster-to-new-forms-what-variables-can-be-used
*The Learning Mode Poster Addon was rebuilt and is now just built right into Fast Poster in the main Scrapebox*
For tokens that you can use in messages in Fast Poster see:
http://scrapeboxfaq.com/what-tokens-can-i-use-in-the-comments-and-messages-file
The Learning Mode Poster Addon can use the tokens below. Note: these tokens will not work with the Fast and Slow Poster in the main Scrapebox window. For the tokens that work with the Fast and Slow Poster, see the link above.
- %USERNAME% will be replaced with the user's name from the file loaded in for usernames.
- %USEREMAIL% will be replaced with the user's email from the file loaded in for emails.
- %USERURL% will be replaced with the user's website from the file loaded in the user URL slot.
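The substitution these tokens perform can be sketched as a simple search-and-replace over the message template. The function name and template text below are my own illustration; only the token names come from the list above:

```python
# Sketch of the Learning Mode Poster token substitution: each %TOKEN% in a
# message template is replaced with the matching value from the loaded files.
def fill_tokens(template, username, email, url):
    return (template
            .replace("%USERNAME%", username)
            .replace("%USEREMAIL%", email)
            .replace("%USERURL%", url))

msg = fill_tokens("Posted by %USERNAME% (%USEREMAIL%) - %USERURL%",
                  "Bob", "bob@example.com", "http://example.com")
print(msg)
# → Posted by Bob (bob@example.com) - http://example.com
```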