The following formats can be used:
If the Url|Username|Password format is used, a random proxy from the “Proxies and Settings” tab will be used when posting articles to blogs. If IP:Port or IP:Port:User:Pass is added, that proxy will be “bound” to the specific account and will always be used for logging in and posting articles to that particular blog.
You can mix accounts with and without proxies: an account with an assigned proxy will use its bound proxy as a priority, while the rest will use a random proxy from the “Proxies and Settings” tab.
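For instance, an accounts list mixing the two forms might look like the sketch below. The URLs, credentials and proxy addresses are placeholders, and this assumes the bound proxy is appended as extra fields on the account line:

```
http://blog1.example.com|user1|pass1
http://blog2.example.com|user2|pass2|192.168.0.5:8080
http://blog3.example.com|user3|pass3|192.168.0.6:8080|proxyuser|proxypass
```

The first account would get a random proxy each time; the second and third would always use their bound proxies.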
The parameter is "automator:<path to the automator file>", for example: scrapebox.exe "automator:C:\Users\Administrator\Desktop\ScrapeBox\1.sbaf"
You can also chain multiple jobfiles to run one after the other, for example:
scrapebox.exe "automator:C:\Users\Administrator\Desktop\ScrapeBox\1.sbaf" "automator:C:\Users\Administrator\Desktop\ScrapeBox\2.sbaf"
This will run ScrapeBox and execute the first Automator jobfile, and when it’s complete run the second jobfile.
You can launch a ScrapeBox Automator jobfile via the command line, so ScrapeBox will launch and start the jobfile instantly.
You would use a shortcut with a target that looks like this:
Or you could go to Start > Run and enter it, or open a CMD prompt and enter Start followed by the path. You could even use a jobfile step in one instance to start another instance and jobfile, using the Automator's execute external exe command.
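From a CMD prompt, that could look like the line below. The install and jobfile paths here are hypothetical, so substitute your own:

```
start "" "C:\ScrapeBox\scrapebox.exe" "automator:C:\ScrapeBox\1.sbaf"
```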
In Automator 2.0 you can chain jobs together, so that after one job finishes it will automatically execute the next job, and so on.
For example:
scrapebox.exe "automator:E:\Projects\ScrapeBox V2\Win32\Favorite Automator Jobs\1.sbaf" "automator:E:\Projects\ScrapeBox V2\Win32\Favorite Automator Jobs\2.sbaf"
You can do this from a command line, or you can create a shortcut to ScrapeBox and include it in the shortcut's target.
***Unfortunately Tumblr removed the ability to post via email, which is what ScrapeBox used. So at this time Tumblr posting is no longer supported.***
Tumblr supported Markdown when posting with the Article Scraper plugin, as the Article Scraper posted via email. More info on Tumblr's email support here:
Here is more info on Markdown:
Markdown supports two styles for creating links: inline and reference. With both styles, you use square brackets to delimit the text you want to turn into a link.
Inline-style links use parentheses immediately after the link text. For example:
This is an [example link](http://example.com/).
<p>This is an <a href="http://example.com/">example link</a>.</p>
Optionally, you may include a title attribute in the parentheses:
This is an [example link](http://example.com/ "With a Title").
<p>This is an <a href="http://example.com/" title="With a Title">example link</a>.</p>
Reference-style links allow you to refer to your links by names, which you define elsewhere in your document:
I get 10 times more traffic from [Google] than from [Yahoo] or [MSN].

[google]: http://google.com/ "Google"
[yahoo]: http://search.yahoo.com/ "Yahoo Search"
[msn]: http://search.msn.com/ "MSN Search"
<p>I get 10 times more traffic from <a href="http://google.com/" title="Google">Google</a> than from <a href="http://search.yahoo.com/" title="Yahoo Search">Yahoo</a> or <a href="http://search.msn.com/" title="MSN Search">MSN</a>.</p>
The title attribute is optional. Link names may contain letters, numbers and spaces, but are not case sensitive:
I start my morning with a cup of coffee and [The New York Times][NY Times].

[ny times]: http://www.nytimes.com/
<p>I start my morning with a cup of coffee and <a href="http://www.nytimes.com/">The New York Times</a>.</p>
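To see how the inline style maps to HTML, here is a toy Python converter covering only the inline-link syntax shown above. It is a sketch for illustration, not how real Markdown processors work (they handle nesting, escaping and much more):

```python
import re

# Matches [text](url) and [text](url "title") in the inline style.
INLINE_LINK = re.compile(r'\[([^\]]+)\]\(([^)\s]+)(?:\s+"([^"]*)")?\)')

def inline_links_to_html(text):
    """Convert Markdown inline-style links to HTML anchors."""
    def repl(match):
        label, url, title = match.group(1), match.group(2), match.group(3)
        if title:
            return f'<a href="{url}" title="{title}">{label}</a>'
        return f'<a href="{url}">{label}</a>'
    return INLINE_LINK.sub(repl, text)

print(inline_links_to_html('This is an [example link](http://example.com/).'))
# → This is an <a href="http://example.com/">example link</a>.
```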
You can increment the names of files saved by the Automator. For example, if you save your harvested URLs to a file called harvested.txt and then loop, on the next round the Automator can call a batch file that copies that round's output to harvested_1.txt.
Define %location%, %destination% and %filename% to suit, edit as needed, and save it as a batch file:
set /a a=1
:loop
if exist %destination%\%filename%_%a%.txt set /a a+=1 && goto :loop
copy %location% %destination%\%filename%_%a%.txt
When it's run, it creates a copy of harvested.txt named harvested_1.txt; the next run creates harvested_2.txt, and so on. You can call it from the Execute external app step, and it can be used with any feature that exports a file, not just harvested URLs.
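For reference, the same first-free-suffix logic can be sketched in Python. This is just an illustration of what the batch file does, not anything ScrapeBox itself runs:

```python
import os
import shutil

def copy_with_increment(src, dest_dir, basename):
    """Copy src into dest_dir as basename_N.txt, where N is the first unused number."""
    n = 1
    # Same role as the batch file's :loop / set /a a+=1: skip existing suffixes.
    while os.path.exists(os.path.join(dest_dir, f"{basename}_{n}.txt")):
        n += 1
    target = os.path.join(dest_dir, f"{basename}_{n}.txt")
    shutil.copyfile(src, target)
    return target
```

Calling it repeatedly with the same source file produces harvested_1.txt, harvested_2.txt, and so on, just like the batch script.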