Scrapebox Troubleshooting

Error Codes

You may encounter error codes as you use scrapebox.

Most error codes are simply standard internet error codes that you would get if you were using a regular web browser. Examples would be anything in these ranges:
1xx, 2xx, 3xx, 4xx, 5xx

Error code 0 - generally means you have bad proxies or you have misconfigured something in Scrapebox.
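If you want to see what those codes look like outside Scrapebox, here is a minimal Python sketch using the requests library (the URL is just a placeholder):

    # Minimal sketch: fetch a URL and print the HTTP status code.
    # The URL is a placeholder - swap in any site you want to test.
    import requests

    try:
        response = requests.get("http://example.com/", timeout=30)
        print(response.status_code)  # e.g. 200 (OK), 404 (not found), 503 (unavailable)
    except requests.RequestException as exc:
        # No status code at all - the connection itself failed (bad proxy, DNS, timeout).
        # This is roughly the situation Scrapebox reports as error 0.
        print("Connection failed:", exc)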

For an explanation of error codes, choose the Help menu in Scrapebox:

Help -> Server Error Code Reference

Error 500 when commenting to WordPress?

This means your data is bad; most likely your emails are invalid, e.g. missing the @ symbol or the .com on the end.
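If you want to sanity-check your emails file before posting, a rough Python sketch like this (the file name is just an example) will flag the obvious problems:

    # Rough sanity check for an emails file - flags entries with no "@"
    # or no dot after the "@" (e.g. a missing .com).  File name is an example.
    with open("emails.txt", encoding="utf-8") as f:
        for line_number, email in enumerate(f, start=1):
            email = email.strip()
            if not email:
                continue
            if "@" not in email or "." not in email.split("@")[-1]:
                print(f"Line {line_number}: suspicious email -> {email}")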

What are the best general troubleshooting steps?

Generally speaking, when you have a problem with Scrapebox these are the best steps to follow. Work through them until your problem is resolved.

1.) Restart your computer

2.) Update Scrapebox to the latest version if there is an update available. Sometimes things change with Google, WordPress, etc., and the updates fix these issues.

3.) Take a moment to go back through your settings and double-check all of the most obvious things. Often a simple setting can cause great grief.

4.) Reinstall Scrapebox. Completely delete all of the Scrapebox operating files (of course back up any files you are working with, like your names, emails, etc. - you only need to delete the actual Scrapebox files that originally come with Scrapebox when you download it). Then re-download the latest version of Scrapebox and try it. (download here: http://www.scrapebox.com/payment-received)

5.) If all else fails, try posting in the sales thread where you purchased Scrapebox to see if the user community can help you (if you purchased on a forum), or you can try the sales thread at BHW, they are very helpful: http://www.blackhatworld.com/blackhat-seo/buy-sell-trade/129096-scrapebox-ultimate-serp-scraper-auto-blog-commenter-prstorm-mode.html

6.) If that doesn't help then you can contact support directly: http://www.scrapebox.com/contact-us

Scrapebox says it is unable to connect to the servers - are they down?

All Scrapebox servers are monitored by Pingdom with 1-minute monitoring. Scrapebox has multiple servers; the publicly monitored ones, including the Scrapebox licensing servers and scrapebox.com, can be found in the Pingdom summary below.

Pingdom summary of Scrapebox servers

Why does scrapebox freeze or hang - Fast Poster

There are many reasons why scrapebox can freeze or hang.

Before figuring out what is causing Scrapebox to freeze, you need to identify what type of freeze is happening.

Black Window - If your Scrapebox window has gone completely black, then generally it means that it froze up because of a lack of resources. You ran out of available processor power or you ran out of RAM.

You can also see a black window if you are using RDP to connect to a remote server such as a VPS or dedicated server. This happens because your PC and the server just aren't exchanging data fast enough. Generally if you leave this alone it will fix itself, or you can log out of your RDP session and log back in. If you are constantly getting black windows using RDP, then either your PC or the server doesn't have enough resources, or the internet connection is too slow or too busy.

If you're constantly getting black windows on your local machine, then you probably need a better processor or more RAM, or you need to not run so many things on the PC at once.

Sockets/Ports waiting - As you run Scrapebox, Windows opens up connections and closes them. This is true for almost every part of Scrapebox; it's how Windows works. If you have too many connections going and Windows doesn't correctly close down the open connections, they can stay open until the TCP timer shuts them down. If this happens, Windows will use up all of the available ports that it can allocate. Meaning all ports are open, not being used, and you can't do anything.

When this happens the Scrapebox window can be moved around and everything appears normal, but the numbers for posting/scraping etc. don't go up, they just stay where they are. This behavior is especially common with free proxies and more prevalent on machines running versions of Windows prior to Vista Service Pack 2, including XP, although Vista and 7 are still susceptible to it.

The fix here is to turn your connections down and your timeouts up, or, if the ports never free up on their own, restart your PC.
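If you want to check whether this is what is happening, one rough way (assuming you have Python and the third-party psutil package installed) is to count how many TCP connections are sitting in TIME_WAIT - a count in the thousands usually means Windows has chewed through its usable ports:

    # Count TCP connections by state.  Requires the third-party "psutil"
    # package (pip install psutil).  Thousands of TIME_WAIT sockets usually
    # means the machine has run out of usable ports.
    import collections
    import psutil

    states = collections.Counter(conn.status for conn in psutil.net_connections(kind="tcp"))
    print("TIME_WAIT:", states.get("TIME_WAIT", 0))
    print("All states:", dict(states))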

White Ghost Window/Not Responding - There is a white ghost window on top of the Scrapebox window and it says (Not Responding) at the top. This generally means that Scrapebox is just busy and is not currently able to display the window. Most of the time if you leave it alone and give it some time, it will come back and everything will be fine.

If leaving it alone isn't fixing the problem, then you can check out the following reasons that it's freezing. Here are the most common ones:

  • Malformed data
  • Antivirus / Firewall Software
  • TCP stack
  • Resources

Malformed data

This includes any files you load into Scrapebox. If you have malformed URLs, such as:

http:/ww.go/bad-data

or if you have poorly formed spin syntax, such as:

{anchor1||anchor2|{

This could be in any of the fields.

Checking for Malformed data

Back to basics - Create a new names file, new emails file, new comments file, and new websites file. Put only one line of text in each file. Do not use any spun text. If you are able to comment to the same list that was freezing before, then it's likely the problem was somewhere in your names/emails/websites/comments files. If Scrapebox still hangs, then it's likely that the issue is in the blog list you are trying to comment to and you need to check for bad URLs.

URLs - Uncheck the "Randomize comment poster blogs list" under settings. Then post with scrapebox. When it freezes note the position where it froze. Like list status 500/4563. Then also note your connections settings, for instance if you had 50 connections going at once.

Then import your list into the blog commenter section and hit the E. Then look for line 500 or wherever the list stopped. Then look at the 50 URLs before and after that line (or whatever your connections were set to). If you find any malformed URLs, fix or delete them.

Malformed URLs can be anything that is not formatted correctly, like: http:/ww.go/bad-data
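If the list is long, a quick script can flag the obviously broken lines for you; a minimal Python sketch (the file name is an example):

    # Scan a blog list for obviously malformed URLs.  File name is an example.
    from urllib.parse import urlparse

    with open("blog_list.txt", encoding="utf-8") as f:
        for line_number, url in enumerate(f, start=1):
            url = url.strip()
            if not url:
                continue
            parsed = urlparse(url)
            # A usable URL needs an http/https scheme and a host with a dot in it.
            if parsed.scheme not in ("http", "https") or "." not in parsed.netloc:
                print(f"Line {line_number}: malformed url -> {url}")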

Spin Syntax - Load your files in the commenter section. Hit test comments. Hit the spin again button 20-30 times rapidly. If Scrapebox locks up, you likely have bad spin syntax. Note that just because it doesn't lock up in this step doesn't mean that your spin syntax isn't the cause of Scrapebox freezing, so you should still manually double-check it.
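If you would rather not eyeball the spin syntax by hand, a rough script can at least catch the most common mistake, unbalanced braces; a minimal Python sketch (it only flags brace problems, not every possible spin syntax error):

    # Rough check for unbalanced spin syntax braces, e.g. {anchor1||anchor2|{
    # It only catches brace problems, not every possible spin syntax error.
    def check_spin_braces(text):
        depth = 0
        for position, character in enumerate(text):
            if character == "{":
                depth += 1
            elif character == "}":
                depth -= 1
                if depth < 0:
                    return f"unexpected '}}' at position {position}"
        if depth != 0:
            return f"{depth} unclosed '{{' brace(s)"
        return None

    print(check_spin_braces("{anchor1|anchor2}"))    # None - looks OK
    print(check_spin_braces("{anchor1||anchor2|{"))  # 2 unclosed '{' brace(s)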

AntiVirus / Firewall Software

Antivirus and firewall software can shut down Scrapebox's access to the internet and cause all sorts of issues. The simple solution here is to temporarily disable all antivirus and firewall software, including Windows Firewall, and see if that solves your problem.

TCP Stack

The half-open connections limit can cause an issue. Generally this is only true if you are running Windows Vista SP1 or older, including XP. The default half-open connections limit was removed by Microsoft in Windows Vista SP2 and newer, including Windows 7. If it's a half-open connections problem, Windows Event Viewer will show an event saying "TCP/IP has reached the security limit imposed on the number of concurrent TCP connect attempts". If this is the case you just ran out of available connections. The quickest fix is to restart your PC. If you keep having this issue, then try running fewer simultaneous instances of Scrapebox, or turning your connections down and your timeouts up. Upgrading to Vista SP2 or newer will ultimately fix this problem.

Resources

Lack of resources can also cause this: not a fast enough processor, not enough RAM. Quick fixes here are to not run as many instances of Scrapebox at once and not run as many other things in the background while you are running Scrapebox. Turn your connections down and your timeouts up. Don't work with as large a list. Most of what Scrapebox does is stored in memory (RAM), and the larger the list you work with, the more space this is going to take. If the memory fills up, things can go bad.

You can of course always upgrade your processor/RAM.

Back to Basics

In general it's important to remember the basics, especially when none of the above works.

When all else fails

You can either contact support: http://www.scrapebox.com/contact-us

or

Take a nap.

or both.  🙂

Scraping not working or returning no results

If you are scraping an engine, especially Google, and you are not getting any results, it's generally due to one of a couple of reasons.

1.) The terms you are scraping simply do not have any results to return.  For example if you manually go to Google.com and search for:
inurl:car inurl:house inurl:cat site:purple.com purple cows

Google is then going to return this:
Your search - inurl:car inurl:house inurl:cat site:purple.com purple cows - did not match any documents.

So if you put that same string into Scrapebox, it's not going to harvest any results either.

2.) Your proxies are blocked or some other error is happening. The easiest way to check this is to go to the settings menu. Uncheck "use multi threaded harvester". Then try to harvest. Scrapebox will display each query, the proxy used and the result, including any error messages.

If you see lines that say:
Results 0 completed using proxy xxx.xxx.xxx.xxx:xxx - this means that the request finished, but the engine returned no results, the same as it would if you went to the engine and manually searched for it.

If you see:
Results 0 Error xxx received using proxy xxx.xxx.xxx.xxx:xxx - Then you can look at the error message and generally determine the problem.  Most common for this one are:

Error 302 - your IP is blocked
Error 404 - the proxy is bad or was never found
Error 407 - your proxy needs authentication. If you're using private proxies, check with your provider to see whether you need to use IP authentication (and how to set it up), or whether you need a username and password (see the sketch below).
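If your provider does use a username and password, this is roughly what authenticated proxy use looks like at the HTTP level; a minimal Python sketch (the host, port and credentials are placeholders, not anything Scrapebox-specific):

    # Roughly what authenticated proxy usage looks like at the HTTP level.
    # Host, port, username and password are placeholders.
    import requests

    proxy = "http://username:password@proxy.example.com:8080"
    proxies = {"http": proxy, "https": proxy}

    response = requests.get("http://example.com/", proxies=proxies, timeout=30)
    print(response.status_code)  # a 407 here means the proxy rejected the credentials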

Error 999

When trying to access a Yahoo service (e.g. Yahoo Mail, Yahoo Groups, Yahoo News, Yahoo Search or Site Explorer), you receive the following error message:

Unable to process request at this time — error 999 Unfortunately, we are unable to process your request at this time. We apologize for the inconvenience. Please try again later.

Operating System: Any.

Background: This error appears to be a “catch-all” error code that Yahoo serves up when it doesn’t have a more specific error code. It essentially means “Oops! Something went wrong but we don’t know what, so we’ll just say that Error 999 occurred.”

The most common reason for receiving Yahoo Error 999 is due to some sort of bandwidth limiting system that Yahoo has put in place on their servers. Once you have exceeded your allotted bandwidth for a specific period of time Yahoo gives you this Error 999 message and doesn’t allow you to access the service. People have primarily reported receiving this error when they try to access Yahoo Mail or Yahoo Groups, but other Yahoo services may also be affected.

Why has Yahoo done this? There are two reasons that I can think of:

1. To prevent DoS (Denial of Service) attacks.
2. To stop automated tasks from hammering their servers with hundreds of requests a second.

There are many programs around that offer to automate access to various Yahoo services, e.g. check your Yahoo mailbox every 5 minutes, archive Yahoo Groups messages, download files from the Yahoo Groups Photos and Files sections, etc. If you use one of these automated tools then there is a very real possibility that you will run into the Error 999 message. Normal human usage of the Yahoo services shouldn't generate enough traffic to trigger the Error 999 message unless you're a very heavy user.

It appears that Yahoo uses your IP address to track the amount of traffic you’re generating on Yahoo, and once you reach the limit you get blocked by the Unable to process request at this time — error 999 message. Once triggered you will find that your IP address has been blocked for a period of time, somewhere between 2 and 24 hours usually.

What is Error 17?

Error 17 occurs when the checksums don't match. This essentially means that the file is corrupt and you need to redownload it. This can happen when you download in the first place, if it gets corrupted in transfer, or while unzipping etc... Windows does many things in the background and sometimes this just happens. Simply redownload a fresh copy of scrapebox from the download link:
http://www.scrapebox.com/payment-received

If the problem persists, try using a different unzip program; you can google for one, there are many free options.
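Error 17 refers to Scrapebox's own internal checksum, but the general idea is the same as hashing a download yourself; a minimal Python sketch that hashes a file so you can compare two downloads of it (the file name is an example):

    # General idea behind a checksum: hash the downloaded file so two copies
    # of it can be compared byte-for-byte.  File name is an example.
    import hashlib

    def sha256_of_file(path):
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1024 * 1024), b""):
                digest.update(chunk)
        return digest.hexdigest()

    print(sha256_of_file("scrapebox.zip"))
    # If a fresh download produces a different hash, the first copy was corrupted.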

What is Error 18?

If you receive error 18 it means that a debugger is running when scrapebox is started.  Please close down the debugger and then start scrapebox.

I submitted for a license activation or transfer and it's been over 12 hours and it's not active.

There's a 99% chance you submitted the wrong info. Make sure you submit the correct PayPal transaction ID. You can find your transaction ID in PayPal; here is a video that shows you how:
http://www.youtube.com/user/looplinescrapebox#p/u/9/k8PWovAQaiU

Also make sure you submit the email address that the money was sent from. It is the email that you received the "thanks for purchasing scrapebox" email at. It is also the email address that PayPal has labeled as "primary" in your PayPal account (unless you have changed your primary email address since you purchased).

If you submitted the correct info, then it could be that an anti-virus or firewall is blocking Scrapebox from reaching the authentication servers to activate your license. Go to Help >> Test Server Connection and see if you have 6 green lights. If you do not have all 6 lights green, then you need to find out what is blocking Scrapebox. Make sure you add allow rules for Scrapebox in all of your anti-virus/malware checkers/firewalls.

What security programs are known to cause issues with Scrapebox?

There are many anti-virus, malware checkers and firewalls on the market, but some are particularly problematic. If you're having issues with Scrapebox connecting to the activation servers and/or your server test has red lights, and you have one of these programs running, pay particularly close attention to it, because it could very well be the cause of the issue.

Anti-Virus:
ESET/Nod 32
AVG

Fast and Slow Poster Error Codes

Login,Failed - This means that the blog requires you to log in in order to post a comment. Since Scrapebox does not support this, the post failed.


How to run Scrapebox with Bitdefender

Bitdefender 2012, and possibly some other versions, is very weird in how it works. You can whitelist Scrapebox and even disable Bitdefender, but it will still block Scrapebox. You have to turn the toolbar off in Bitdefender and whitelist Scrapebox in Bitdefender.

Under "settings / privacy control". Make sure "Show Bitdefender Toolbar" is OFF.  Also then make sure scrapebox is whitelisted.

What does Socket Error mean in an addon?

A socket error generally occurs when something forcibly closes the connection; it could be a firewall, bad proxies, etc.

First, try it with and without proxies. Addons only pull proxy data from Scrapebox on startup, so whether the use proxies box is checked or unchecked, and whatever proxies are loaded, will only be refreshed in the addon if the addon is restarted. So try unchecking the use proxies box and restarting the addon - does it still happen?

If so then it’s probably something firewall or antivirus related.

It's basically a process of elimination. If it happens without proxies it's probably local; if it happens only with proxies, then it's probably a proxy issue.

How to fix the error MIG_NO_SERVER in Scrapebox?

This happens when something interferes with or completely blocks the connection while it's connecting during the first run after an activation or transfer. This is a local issue on your PC.

Did you unzip it from the zip file? Also, are you running it in SandBoxie, a VM, or DropBox, from an external or network drive, or perhaps using proxy software to run it through a proxy? Or is there anything unusual about the network or environment you are running it on?

If so, you need to run it from the desktop of the machine it's activated on, and ensure it's unzipped and nothing can interfere with its connection - such as not running it in DropBox, not routing it through proxies, not running it on a network or external drive, etc.

What does Completed But Error or Completed, But Error mean?

It means the request worked (200 OK response) and didn't encounter any error like 302, 301, 501 or 404; however, results could not be found. Generally this happens because a search engine changed its format and a marker cannot be found.

Check to make sure you are using the latest version of Scrapebox and/or the addon/plugin that you are getting the error in, as often it is already fixed in a new version.
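To picture what "a marker cannot be found" means: the request downloads the page fine, then the parser looks for a fixed piece of HTML around each result, and if the engine changes that HTML the page parses to nothing. A toy Python sketch (the marker string is made up for illustration - it is not Scrapebox's real parser):

    # Toy illustration of "completed but error": the page downloads fine (200 OK)
    # but the expected result marker is missing, so zero results are extracted.
    # The marker string is made up - it is not Scrapebox's real parser.
    def extract_results(html, marker='<div class="result">'):
        if marker not in html:
            return None  # completed, but error: marker not found
        return html.count(marker)

    old_format = '<div class="result">a</div><div class="result">b</div>'
    new_format = '<div class="serp-item">a</div>'  # engine changed its layout

    print(extract_results(old_format))  # 2
    print(extract_results(new_format))  # None - the parser needs an update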


ScrapeBox cannot run from this location(out of an archive or temp folder)

When you get the error "ScrapeBox cannot run from this location(out of an archive or temp folder)" it means that you are trying to run Scrapebox from a zip file archive.

Scrapebox comes in a compressed zip file, which you then need to extract. Depending on the version of Windows you are using, you can usually double-click on the scrapebox.zip file that you downloaded from the site, and a window will open. Inside the window you will see the scrapebox.exe file, and at the top of the window you will probably see a button to Extract or Extract All.

You need to click the extract button and then extract the contents of the zip file to a folder that is on your Desktop or in your Documents folder. Do not place Scrapebox in a folder that is directly on the C root drive or in any "program files" folders. Also do not place the scrapebox.exe file directly on your desktop - when you run the program it will download needed files and folders and place them in the same folder that the scrapebox.exe file is in, so if you place it on the desktop it will place all of these files and folders directly on the desktop as well. So place it in a folder that is on your desktop or in a folder that is in your Documents folder.

If you are still having trouble, you can google for "how to extract or unzip files on windows X" - replace the X with your version of Windows. You can also download free 3rd party tools that will extract files; you can google for these as well.
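For reference, if you are comfortable with a little scripting, extracting the zip is also only a few lines of Python; a minimal sketch (the paths are examples - adjust them to wherever your download actually is):

    # Minimal sketch: extract scrapebox.zip into a folder inside Documents.
    # Paths are examples - adjust them to wherever your download actually is.
    import zipfile
    from pathlib import Path

    zip_path = Path.home() / "Downloads" / "scrapebox.zip"
    target = Path.home() / "Documents" / "Scrapebox"
    target.mkdir(parents=True, exist_ok=True)

    with zipfile.ZipFile(zip_path) as archive:
        archive.extractall(target)

    print("Extracted to", target)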

After it's extracted, you need to run scrapebox.exe, click activate, enter your information and then click submit. It will then take up to 12 hours for your license to be activated.

Why Does Rank Tracker Show A Different Rank Than What I See In A Browser Or Elsewhere?

Rank checking is a general gauge at best.

Engines show different results to different people for different reasons for the same exact keyword/search query. I'll talk about Google as an example just because they are the largest.

Useragents/Browsers:
You can literally take the same machine and install multiple different browsers and load google.com in each of them and search for the same exact query and get multiple different sets of search results back.  Sometimes they can vary dramatically, 1 set might have domains that another set doesn't even show, all on the same pc.

IPs:
Different IPs will generally see different sets of results, and not just for local-type searches. IPs in the same state/country can show considerably different results, and as you change from country to country you might be looking at 100% different results for the same query. In Rank Tracker you always want to try to use proxies or an IP from the same country your potential viewers would be from.

10 vs 100 and being logged in:
Even simple things like changing results from 10 results per page to 100 results per page can show dramatically different results.  Also being logged into google can show different results.

The Big Picture:
You can't know what your end user will be using in the way of browser, results per page, IP and other such factors that all affect what they see when they search. If you take one rank tracking tool using one user agent, IP and X results per page, and then compare it to Scrapebox, you can easily see a variance in your rank, and further variance again if you look at it in a browser.

The most important thing here is that there is No Wrong Answer. If your browser shows your site at one position for a term and Rank Tracker shows another, that doesn't mean either is wrong or broken; it means that Google is serving up different sets of results. If anything it's helpful to see this, as your customers might see what Rank Tracker sees and not what your browser sees.

You could literally take 50 people, set them up differently, have all 50 search the same thing, and get 50 different sets of results back - some of which may include your site in positions all over the map and others which won't include your site at all. This is how Google works.

So it's helpful to be aware that rank tracking is a loose idea of how your site and SEO/link building is doing overall, but it's not an exact science, as Google is not consistent in SERPs for all people.

Why when I click the manage button in the proxies section does Scrapebox freeze up?

Symptom:  When I click the manage button in the proxies section, Scrapebox freezes up.  I have to force close it by killing the process via task manager.

Solution:

It could be corrupt files, so you can try a fresh download.

You can download scrapebox from here:
http://www.scrapebox.com/payment-received

Unzip it to a folder on your desktop or in your documents folder.

It does pop up a new window, so make sure you don't have a popup blocker.

Also it could be that the window is pushed behind the main Scrapebox window.  So you can try moving the window out from behind the main window using the keyboard.  If you don’t know how to do this you can google how to do it for your version of windows.

Lastly, it could be that security software is stopping the window from popping up, locking Scrapebox when it tries to pop the window. So make sure Scrapebox is whitelisted in all anti-virus, malware checkers and firewalls.

Unhandled Archive Type

This just means the zip file is corrupt - for example the download timed out, missing write permissions produced a 0 byte file, or security software scanned and locked the file and interrupted the download, etc.

To solve this, delete Scrapebox (assuming you're getting this when you are trying to install Scrapebox) and retry with a fresh download.

OR

Try going to Options > Select Download Server, and selecting server #2 instead.

OR

Try whitelisting the ScrapeBox folder in all security software to ensure nothing is blocking it, and ensure ScrapeBox is in a folder on the desktop or a location with write permissions (such as a folder in the documents folder).

Why are my results not showing properly for my given google.tld in rank tracker?

Google has made changes that affect Rank Tracking

On October 27th 2017 google made changes that will affect rank tracking in Scrapebox Rank Tracker, if you are tracking google ranks.

The basic idea is that Google used to let you input any google you want, like google.co.uk or google.fr, and it would give you results from that google. So if you used google.co.uk you would get UK results, and google.fr would give you results from France, including local results.

That is no longer the case. Google now gives you results based on the geo location of your IP, REGARDLESS of what google you choose.

So if your IP is in Paris, France and you go to google.com and type in food, you will now get results for Paris, France. If you go to google.co.uk, you will still get results for Paris, France.

So the google you choose is irrelevant now; they only return results based on the IP you are using. You can go to any google around the world and get the same results.

You can read more about it here:

https://www.theverge.com/2017/10/27/16561848/google-search-always-local-results-ignores-domain

and

https://productforums.google.com/forum/#!topic/websearch/AzcsFmuFPEg/discussion

What this means for Rank Tracking

While you can still manually adjust your location results in a browser, this uses JavaScript. Scrapebox uses raw sockets and threads, as these provide many advantages; however, they do not support JavaScript.

So the only way to get localized results in the Scrapebox Rank Tracker is to use an IP from the locale you want.

So if you want French results you need an IP from France, if you want German results you need an IP from Germany, etc.

Yes I agree, google is just making it more and more difficult for all of us.
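In practice that just means routing your requests through an IP in the country you care about. A rough Python sketch of the idea at the HTTP level (the proxy address and credentials are placeholders, and this only illustrates the routing - it is not what Scrapebox does internally):

    # Rough illustration: the proxy's country, not the google TLD, decides
    # which local results come back.  The proxy address is a placeholder.
    import requests

    french_proxy = "http://username:password@fr.proxy.example.com:8080"
    proxies = {"http": french_proxy, "https": french_proxy}

    # Even though this hits google.co.uk, the results are localized to the
    # proxy's location (France in this example).
    response = requests.get(
        "https://www.google.co.uk/search",
        params={"q": "restaurants near me"},
        proxies=proxies,
        timeout=30,
    )
    print(response.status_code)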

Proxies

I have had some people ask where they can get IPs from specific countries. I wrote up a review of a service that can generally give you fixed IPs from a given country (as long as they have IPs from that country), and they have a lot of different countries.

The IPs stay fixed for as long as you pay for them. Meaning if you buy them and keep paying for 10 months, you keep the same IP for 10 months. So once you set it up you don't have to keep messing around with it. For the private proxies you "can" request a new IP every 30 days, but by default the IPs stay fixed.

I also asked them to make a discount for my readers and they did. Further note, this is a good discount, which is why this is NOT an affiliate link:

http://scrapeboxfaq.com/squid-scrapebox-proxies-review-and-information

Discount Code: loopline20


How to use

Unfortunately the below applies to Scrapebox for Windows only; this method will not work on the Mac version of Scrapebox, due to how Mac works.

If you are wondering how to make rank tracker use different ips for different projects, there is no way to do that specifically in Rank Tracker.

So the way you would do it is to simply run multiple instances of it.

So say you want to rank track from 3 different countries. You set up 3 different Scrapebox folders, 1 for each country.

Install Rank Tracker in each of them and then set up a given country's project(s) in that copy. So we could do

Google France

Google USA

Google Germany

Set up a folder for each, and then in the Rank Tracker that is attached to the Google France one, you put in the French IP(s).

For the Google USA one, you put in the USA IPs, and for the Google Germany one, put in the German IPs.

Then when you run each, it will be using its own IPs. Here is a video on how to set up more than 1 instance:

https://www.youtube.com/watch?v=aZzdE6ybu38

Why when I click stop the stop and start buttons grey out but nothing stops and I have to force close with task manager - aka locked threads?

That means that something has locked 1 or more of the threads. This can be security software such as anti-virus, malware checkers and firewalls. So you should whitelist Scrapebox in all security software, and then you can whitelist the entire Scrapebox folder as well.

Further, any program that accesses the internet can lock threads - things like Skype, uTorrent, etc. So you can try closing down any unneeded programs. Then, if it's working, you can turn programs back on 1 by 1 to find the culprit.

Further computer optimization software can lock threads so you can shut any such software down.

Take note that disabling security software (such as anti-virus, malware checkers and firewalls) often only stops new rules from forming, but allows existing rules to still fire. So you have to fully whitelist in the security software or uninstall the security software (as a test).

Further, some security software requires you to whitelist in more than one place before it takes effect.

Also note that disabling a router firewall does actually fully disable it.

Basically you have to sort out what is locking the threads, because scrapebox is forced to wait until all threads are released.  On occasion it can be your operating system that does it, so you can try restarting your machine and/or lowering total connections.

One other thing to note is that this can happen with proxies that keep returning small amounts of data; it won't trigger the timeout because the connection is still active. So try a test using no proxies, or make sure you are using quality private proxies.
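The reason a slow drip of data never trips the timeout is that a per-read timeout resets every time a few bytes arrive; only an overall deadline catches it. A rough Python sketch of the difference (the URL and numbers are just for illustration):

    # A per-read timeout resets each time a few bytes arrive, so a proxy that
    # drips data slowly never trips it.  An overall deadline does catch it.
    # URL and numbers are only for illustration.
    import time
    import requests

    deadline_seconds = 60
    start = time.monotonic()

    response = requests.get("http://example.com/big-page", stream=True, timeout=10)
    chunks = []
    for chunk in response.iter_content(chunk_size=1024):
        chunks.append(chunk)
        if time.monotonic() - start > deadline_seconds:
            response.close()
            raise TimeoutError("overall deadline exceeded - connection alive but too slow")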

Lastly, if you're running Mac, you can try lowering the connections. Mac has terrible error handling when it comes to lots of errors stacking up quickly. So if there are too many errors stacking up too quickly, Mac can choke, and lowering the threads fixes this. This is a non-issue on Windows.