For a long time, Scrapebox has been one of the best tools available for marketers of all stripes. It’s extremely powerful, with a ton of modules and plugins for pretty much anything you could want to do. Black hat marketers use it, white hat marketers use it, and everyone in between. If you need mass data in any situation, Scrapebox is one of the best ways to get it.
For years, though, there’s been just one problem: Scrapebox is a native Windows-only application. It’s difficult for a developer or small development team to build something that supports all the different software architectures out there. It’s even harder when the program, like Scrapebox, takes advantage of advanced processor threading and other hardware features you don’t generally get on something like a Mac.
In recent years, though, Apple has been moving more and more toward a PC-like architecture for its systems, and as a result, taking advantage of that hardware has become easier. That’s why, on the 7th of March this year, Scrapebox announced on Reddit that they are making an official Mac release.
This is great news for Mac SEOs and marketers. Oh, they’ve been using Scrapebox this whole time, of course. The method of choice is to run the software in a virtual machine emulating a Windows environment. It’s slower and less efficient than running natively on a Windows box, and the VM eats up a lot of system resources, but it’s still better than anything natively available on Mac.
What Scrapebox for Mac Gets You
If you haven’t used Scrapebox before, you might think I’m exaggerating when I say how much it can do. In actuality, it’s an extremely powerful information harvesting engine, with a ton of plugins and add-ons to use.
Scrapebox on its own has a handful of primary features that appeal to any marketer. Some of them are a little on the black hat side, but others are perfectly benign, and just like a weapon, usage makes the hat.
- Search engine harvesting allows you to harvest search engine results pages for keyword lists or URLs quickly and easily. It works with Google, Yahoo, and Bing, as well as dozens of others, like BigLobe, Ask, Search.com, and even specific Google services like Maps or direct API access.
- Keyword harvesting allows you to plug in one keyword and gather autocomplete, suggestion, and extension long-tail keywords to research for your own SEO efforts.
- Proxy harvesting allows you to search for free and known “private” proxy lists to make a list of your own, complete with testing to make sure they meet your standards for location and latency. It can even filter for proxies with specific ports, specific countries, or specific speeds.
- Comment posting allows you to, well, let’s not mince words; it’s a comment spam engine. I can see a few narrow uses for it that aren’t spam, but it’s a tool designed for link building via blog comment sections. You can also make it post comments on your own site, to make your community look larger. A little deceptive, but not strictly black hat.
- Link checking allows you to pull a backlink profile and check them to make sure the links exist, or scan your site for links and check their status, anchor text, and parameters.
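To give you a feel for what a feature like link checking actually does under the hood, here’s a minimal sketch of an alive-or-dead URL check in Python. This is my own illustration using only the standard library, not Scrapebox’s implementation, and the URLs are placeholders.

```python
# Minimal sketch of an alive/dead URL check, standard library only.
# The URLs below are placeholders for illustration.
from urllib.request import Request, urlopen
from urllib.error import URLError, HTTPError

def check_url(url, timeout=10):
    """Return (url, status) where status is an HTTP code or an error string."""
    req = Request(url, headers={"User-Agent": "Mozilla/5.0"}, method="HEAD")
    try:
        with urlopen(req, timeout=timeout) as resp:
            return url, resp.status
    except HTTPError as e:
        return url, e.code          # e.g. 404 for a dead page
    except URLError as e:
        return url, str(e.reason)   # DNS failure, refused connection, etc.

urls = ["https://example.com/", "https://example.com/missing-page"]
for url, status in map(check_url, urls):
    print(url, status)
```

Scrapebox does this at scale, with threading and proxy rotation on top, which is exactly the kind of workload that benefits from running natively rather than in a VM.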
After all of that, there are additional modules you can add: upgrades to the framework that let you do a lot more.
- An anchor text scan module that lets you scan a domain for any instance of your backlink, and what the anchor text of that link is. It will also give instances of redirects, errors, and other issues with the link.
- An Alexa checker will scan a list of URLs you input for their popularity and reach on Alexa. You can use a list Scrapebox harvested, or one of your own.
- A simple alive or dead scanner will check a list of URLs to see if the page exists or if it’s dead.
- An article scraper that can harvest the text of articles on a site, based on links or keywords. This alone is edging towards black hat, but can be used legitimately to study word usage and other content analysis factors.
- A backlink checker that allows you to plug in a URL and see how many links it has, as well as downloading the top 1,000 links according to their quality rating. This one integrates with Moz and requires a Mozscape API key.
- A broken link checker that allows you to plug in a list of URLs and check to see if they’re broken or not, if they redirect, if they 404, and other possible issues. This is useful for auditing links or for broken link building and expired domain registration.
- A bulk domain checker that takes a list of domains and pulls information about them, primarily regarding the location of the server based on IP. You get the IP, the country code, country, state, city, latitude, longitude, and any server codes. Of course, a lot of times this will be server farm information, but it can still be useful.
- A page authority scraper that, like the backlink checker, uses a Mozscape API key to use a list of domains of your choice to pull page authority, domain authority, MozRank, number of external links, and server status. This is extremely useful for checking the value of your backlinks.
- A Google Cache extractor, that pulls the date and time of the most recent cache Google has made for a page. This can help you identify trends in Google indexing, among other things.
- A competition finder, that shows you the number of results in Google search for a given keyword. This will show you in broad strokes how popular various keywords are in comparison to one another, though it doesn’t show you volume or the strength of existing content. Still, it’s a good basis for research.
- A Google image search scraper that pulls the images that show up for a given keyword. To avoid issues with licensing, you can even tell it to find only images licensed under creative commons, or any of the other supported Google filters.
- A link extractor that pulls external and internal links on a given page and shows you how many of each there are.
- A malware addon that scans a list of URLs you input and looks for malware and the status of the site, including historical information if a site has been flagged before.
- A bulk URL shortener, that you can use to plug in a list of URLs and get short versions from TinyURL and a handful of other shorteners, with the ability to add more to customize the shortlinks you create.
- A page scanner that checks to see what framework or source code the site uses, similar to a “builtwith” site, to identify if a page is WordPress, PHPBB, VBulletin, or another common framework. Custom code throws it for a loop, though.
- A rapid indexer that helps ensure prompt site indexation by creating pages that link to your site on various third party services; it can also be used for reputation management.
- A sitemap scraper that looks for sitemaps and extracts the links from them to get a complete picture of any website that has a sitemap.
- A social checker that searches for URLs and pulls their follower count or comparable metric with Facebook, Google+, LinkedIn, Pinterest, and other sites. It can show followers, shares, likes, and other numeric metrics.
- A TDNAM closeout addon that searches for ending domain name auctions and shows their age, traffic, and price as well as the end time of the sale.
- A vanity name addon that looks for a username on various services, like Wiki, LinkedIn, Tumblr, and YouTube, searching to see if the name is available or taken.
- A WHOIS scraper that pulls the domain name, TLD, proxy, registration, expiration, name, email, phone, and other information about the owner on record of the sites in your list. Useful for contacting site owners, or for sniping expiring domains.
- A dupe removal tool that allows you to remove duplicate pages or files on your site.
- A link status checker that checks your link list to see if the links are followed or not.
- A link validator that checks your list of URLs to see if the links exist or not.
- A social account scraper that allows you to plug in a list of domains and searches for their social media presence, giving you links to their profiles if they can be found.
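Many of these add-ons boil down to the same pattern: parse a page, pull out a specific kind of data, and tally it. As an illustration, here’s a rough sketch of what the link extractor above does, counting internal versus external links. Again, this is my own standard-library example with stub HTML, not Scrapebox’s code.

```python
# Rough sketch of a link extractor: count internal vs. external links
# on a page. Standard library only; the HTML below is a stub.
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkExtractor(HTMLParser):
    def __init__(self, base_host):
        super().__init__()
        self.base_host = base_host
        self.internal, self.external = [], []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        href = dict(attrs).get("href")
        if not href:
            return
        host = urlparse(href).netloc
        # Relative links and same-host links count as internal.
        if not host or host == self.base_host:
            self.internal.append(href)
        else:
            self.external.append(href)

html = """<a href="/about">About</a>
          <a href="https://example.com/blog">Blog</a>
          <a href="https://other.org/">Elsewhere</a>"""

parser = LinkExtractor("example.com")
parser.feed(html)
print(len(parser.internal), "internal,", len(parser.external), "external")
# prints "2 internal, 1 external"
```

The value of a tool like Scrapebox is doing this across thousands of URLs at once, with exports and filters, rather than one page at a time.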
And those are all just the free add-ons! There are also paid plugins, which include a host of gray and black hat tools like article spinners, expired domain registrations, yellow pages submitters, and more. Most of them are $20 each, or $97 for all five of them.
Now, all of this is well and good, but there are a few problems with the Mac version of Scrapebox.
The first problem you can see in that Reddit thread linked above, if you still have it open. Scroll down a bit and you see someone asking when the version will be ready, and the Scrapebox rep mentions that it’s still early enough in development that they don’t have a release date (as of Mar 23, 2017). They don’t say anything about transferring licenses for people who already own the software, either. I’m sure more information will be available when the version is released, but as of yet, no such information exists.
Another possible issue stems from the different ways PCs and Macs handle software. One common thing PC users do with an app like Scrapebox is run more than one instance of the software at once. That way they can run multiple scans simultaneously, which obviously takes more system resources, but allows more than one type of operation at a time. This might not be possible in the Mac version.
Likewise, there are a lot of automation options and third party plugins that work fine with the PC version of the software, but which might not work on the Mac version. Whether they do or not will depend on how much the developers try to keep the two systems working in the same way. I would assume they are focusing more on their core functionality, possibly without all of the add-on modules above even, before they start working on APIs and third party support. Your favorite options might not be available.
Speaking of third party integrations, Scrapebox works well with a lot of other applications, like the GSA suite and other scrapers. Unfortunately, many of these are also Windows-only, which means that even if Scrapebox runs natively on your Mac, you’ll still need a VM for the applications it integrates with, losing much of the benefit.
My recommendation is to stick with the Windows version for now and not get too excited about the Mac version. It pains me to say it, as a Mac fan and user myself, but you can’t deny that the platform has limitations. I’ll wait and see whether the new Scrapebox version works as well as or better than the Windows version, and if it does, I’ll happily switch.