Scrapebox Premium Plugins

What proxy formats can be used in the ScrapeBox Article Scraper plugin?

The following formats can be used:
Url|Username|Password
Url|Username|Password|IP:Port
Url|Username|Password|IP:Port:User:Pass
If Url|Username|Password is used, a random proxy from the “Proxies and Settings” tab will be used when posting articles to blogs. If IP:Port or IP:Port:User:Pass is added, then that proxy is “bound” to the account and will always be used for logging in and posting articles to that particular blog.
You can mix accounts with and without proxies: an account with a proxy assigned will use that bound proxy as a priority, otherwise a random proxy from the “Proxies and Settings” tab will be used.
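
As a minimal sketch, assuming a hypothetical blog at http://myblog.example.com with placeholder credentials and proxy values, the three formats would look like this:

http://myblog.example.com|admin|MyPassword
http://myblog.example.com|admin|MyPassword|192.168.0.5:8080
http://myblog.example.com|admin|MyPassword|192.168.0.5:8080:proxyuser:proxypass

The first account would post through a random proxy from the “Proxies and Settings” tab, while the second and third would always log in and post through 192.168.0.5:8080.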

How Do You Execute A Scrapebox Automator Job File Via Command Line?

V2:
The parameter is "automator:<path to the automator file>", for example:

scrapebox.exe "automator:C:\Users\Administrator\Desktop\ScrapeBox\1.sbaf"

You can also chain multiple jobfiles to run one after the other, for example:

scrapebox.exe "automator:C:\Users\Administrator\Desktop\ScrapeBox\1.sbaf" "automator:C:\Users\Administrator\Desktop\ScrapeBox\2.sbaf"

This will run ScrapeBox and execute the first Automator jobfile, and when it’s complete run the second jobfile.


V1:
You can launch a ScrapeBox Automator job file via the command line, so ScrapeBox will open and start the job file instantly.

You would use a shortcut with a target that looks like this:

"C:UsersLooplineDesktopScrapeBoxscrapebox.exe" "C:UsersLooplineDesktopproxies.sbj"

Alternatively, you can go to Start > Run and enter the same command, or open a command prompt and enter "start" followed by the path. You could even use a jobfile step in one instance to start another instance and job, using the start external exe command in the Automator.
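
As a minimal sketch using the paths from the example above, the command prompt version would be:

start "" "C:\Users\Loopline\Desktop\ScrapeBox\scrapebox.exe" "C:\Users\Loopline\Desktop\proxies.sbj"

The empty quotes after start supply the window title, which start would otherwise take from the first quoted argument.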

Automator 2.0 - Chaining Jobs Together

In Automator 2.0 you can chain jobs together, so that after one job finishes it will automatically execute the next job, and so on.

For example:

scrapebox.exe "automator:E:\Projects\ScrapeBox V2\Win32\Favorite Automator Jobs\1.sbaf" "automator:E:\Projects\ScrapeBox V2\Win32\Favorite Automator Jobs\2.sbaf"

You can run this from a command line, or you can create a shortcut to ScrapeBox and include the parameters in the shortcut's target.
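
As a sketch, the shortcut's Target field could look like this (the install and job paths below are assumed placeholders):

"C:\ScrapeBox\scrapebox.exe" "automator:C:\ScrapeBox\Jobs\1.sbaf" "automator:C:\ScrapeBox\Jobs\2.sbaf"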

How to post links with Tumblr in the Article Scraper plugin in the 1.x and 2.0 versions of ScrapeBox?

***Unfortunately, Tumblr removed the ability to post via email, which is what ScrapeBox used. So at this time Tumblr posting is no longer supported***


Tumblr supports Markdown when posting with the Article Scraper plugin, as the Article Scraper posts via email. More info on Tumblr's email support is here:

https://www.tumblr.com/docs/en/posting#emailpostheader

Here is more info on Markdown:

Links

Markdown supports two styles for creating links: inline and reference. With both styles, you use square brackets to delimit the text you want to turn into a link.

Inline-style links use parentheses immediately after the link text. For example:

This is an [example link](http://example.com/).

Output:

<p>This is an <a href="http://example.com/">
example link</a>.</p>

Optionally, you may include a title attribute in the parentheses:

This is an [example link](http://example.com/ "With a Title").

Output:

<p>This is an <a href="http://example.com/" title="With a Title">
example link</a>.</p>

Reference-style links allow you to refer to your links by names, which you define elsewhere in your document:

I get 10 times more traffic from [Google][1] than from
[Yahoo][2] or [MSN][3].

[1]: http://google.com/        "Google"
[2]: http://search.yahoo.com/  "Yahoo Search"
[3]: http://search.msn.com/    "MSN Search"

Output:

<p>I get 10 times more traffic from <a href="http://google.com/"
title="Google">Google</a> than from <a href="http://search.yahoo.com/"
title="Yahoo Search">Yahoo</a> or <a href="http://search.msn.com/"
title="MSN Search">MSN</a>.</p>

The title attribute is optional. Link names may contain letters, numbers and spaces, but are not case sensitive:

I start my morning with a cup of coffee and
[The New York Times][NY Times].

[ny times]: http://www.nytimes.com/

Output:

<p>I start my morning with a cup of coffee and
<a href="http://www.nytimes.com/">The New York Times</a>.</p>

From https://daringfireball.net/projects/markdown/basics

How to increment file names that are saved by the Automator?

You can increment the file names that are saved by the Automator. For example, if you save your harvested URLs to a file called harvested.txt and then loop, on the next round you can call a batch file from the Automator that copies that round's file to harvested_1.txt, then harvested_2.txt, and so on.

You would edit this as needed and save it as a batch file:

@echo off
rem Folder to place the numbered copies in, the file to copy, and its base name
set destination=C:\test
set location=C:\test\harvested.txt
set filename=harvested
set a=1

:loop
rem Count up until an unused number is found, then copy the file to that name
if exist %destination%\%filename%_%a%.txt set /a a+=1 && goto :loop
copy %location% %destination%\%filename%_%a%.txt

When it's run, it will create a copy of harvested.txt named harvested_1.txt; the next time it's run it will create harvested_2.txt, and so on. You can call it from the Execute external app step, and this approach can be used for any feature that exports a file, not just harvested URLs.
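
As a concrete illustration (folder and file names taken from the batch file above), after three runs the folder C:\test would contain:

harvested.txt
harvested_1.txt
harvested_2.txt
harvested_3.txt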