100’s Of Links/Hour Automated - Introduction To Black Hole SEO
I really am holding a glass of Guinness right now, so in all the authority it holds…Cheers! I’m kind of excited about this post because frankly it’s been a long time coming. For the last 7-9 months or so I’ve been hinting and hinting that there is more to Black Hat than people are willing to talk about. As “swell” as IP delivery and blog spam are, there’s an awesome subculture of Black Hats that takes the rabbit hole quite a bit deeper than you can probably imagine. This is called Black Hole SEO. By no means am I an expert on it, but over the last few years I’ve been getting in quite a bit of practice and starting to really kick some ass with it. The gist: Black Hole SEO is the deeper, darker version of black hat. It’s the kind of stuff that makes those pioneering Black Hat Bloggers who dispel secrets like parasite hosting and link injection techniques look like pussies. Without getting into straight up hacking, it’s the stuff black hatters dream about pulling off, and I am strangely comfortable with kicking in some doors on the subject. However, let’s start small and simple for now. Then if it takes well we’ll work our way up to some shit that’ll just make you laugh it’s so off the wall. Admit it, at one point you didn’t even think Advanced SEO existed.
In my White & Black Hat Parable post I subtly introduced this technique as well as the whole Black Hole SEO concept. It doesn’t really have a name, but basically it follows all the rules of Black Hole SEO. It targets sites on a mass scale, particularly scraper sites. It tricks them into giving you legitimate and targeted links, and it grabs its content on an authoritative scale (this will be explained in a later related post). So let’s begin our Black Hole SEO lesson by learning how to grab hundreds of links an hour in a completely automated and consenting method.
Objective

We will attempt to get black hat or scraper sites to mass grab our generated content and link to us. It’ll target just about every RSS scraper site out there, including Blog Solution and RSSGM installs as well as many private scrapers and splogs.
Methodology

1) First we’ll look at niche and target sources. Everyone knows the top technique for an RSS scraper is the classic Blog N’ Ping method. It’s basically where you create a scraped blog post from a search made on a popular blog aggregator like Google Blog Search or Yahoo Blog Search. Then they ping popular blog update services to get the post indexed by the engines. For a solid list of these, check out PingOMatic.com. Something to chew on: how many of you actually go to Weblogs.com to look for new interesting blog posts? Haha, yeah, that’s what I thought. 90% of the posts there are pinged from spam RSS scraper blogs. On top of that there’s hundreds going in an hour. Kinda funny, but a great place to find targets for our link injections nonetheless.
2) We’ll take Weblogs.com as an example. We know that at least 90% of those updates will be from RSS scrapers that will eventually update and grab more RSS content based upon their specified keywords. We know that the posts they make already contain the keywords they are looking for, otherwise they wouldn’t have scraped them in the first place. We also have a good idea of where they are getting their RSS content. So all we’ve got to do is find what they want, find where they are getting it from, change it up to benefit us, and give it back.
3) Write a simple script to scrape all the post titles within the td class="blogname" cells located between the <!-- START - WEBLOGS PING ROLLER --> comments within the html. Once you’ve got a list of all the titles, store it in a database and keep doing it indefinitely. Check for duplicates and continuously remove them.
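Step 3 could be sketched with nothing but the standard library. The table markup below is an assumption based on the post’s description of the page, so the patterns would need adjusting against the live HTML:

```python
import re

def extract_blog_titles(html):
    """Pull every title out of the blogname cells in the Weblogs.com
    ping roller section (markup layout is an assumption from the post)."""
    # Limit the search to the ping-roller section when the comments are present.
    m = re.search(r'<!-- START - WEBLOGS PING ROLLER -->(.*?)<!--', html, re.S)
    section = m.group(1) if m else html
    return re.findall(r'<td class="blogname">\s*<a[^>]*>(.*?)</a>', section, re.S)

# Tiny sample in the layout the post describes; the real page will differ.
sample = '''
<!-- START - WEBLOGS PING ROLLER -->
<tr><td class="blogname"><a href="http://a.example">Cheap Xbox Games</a></td></tr>
<tr><td class="blogname"><a href="http://b.example">My Travel Diary</a></td></tr>
<!-- END -->
'''
titles = extract_blog_titles(sample)
print(titles)  # ['Cheap Xbox Games', 'My Travel Diary']
```

Deduping and database storage are left out here; a UNIQUE index on the title column would handle the duplicate removal the step calls for.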
4) Once you got all the titles steadily coming in write a small script on your site that outputs the titles into a rolling XML feed. I know I’m going to get questions about what a “rolling XML feed” is so I’ll just go ahead and answer them. It’s nothing more than an xml feed that basically updates in real time. You just keep adding posts to it as they come in and removing the previous ones. If the delay is too heavy you can always either make the feed larger (up to about 100 posts is usually fine) or you can create multiple XML feeds to accommodate the inevitably tremendous volume. I personally like the multiple feed idea.
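A rolling feed can be as simple as a capped queue that re-renders the RSS on each request. A minimal sketch, assuming RSS 2.0 output; the site URL and feed title are placeholders, and the database wiring is omitted:

```python
import xml.sax.saxutils as sx
from collections import deque

class RollingFeed:
    """Keep only the newest `size` items; old posts fall off automatically."""
    def __init__(self, site_url, size=100):  # the post suggests up to ~100 items
        self.site_url = site_url
        self.items = deque(maxlen=size)

    def add(self, title, description):
        # Newest first; deque(maxlen=...) silently drops the oldest entry.
        self.items.appendleft((title, description))

    def render(self):
        body = "".join(
            "<item><title>%s</title><link>%s</link><description>%s</description></item>"
            % (sx.escape(t), sx.escape(self.site_url), sx.escape(d))
            for t, d in self.items)
        return ('<?xml version="1.0"?><rss version="2.0"><channel>'
                '<title>Feed</title><link>%s</link>%s</channel></rss>'
                % (sx.escape(self.site_url), body))

feed = RollingFeed("http://example.com/")
feed.add("Scraped Title", "Short sales pitch linking back to us.")
xml = feed.render()
```

Running multiple feeds is then just multiple `RollingFeed` instances, each written out to its own URL.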
5) Give each post within the feed the same title as you scraped from Weblogs. Then change the URL output field to your website address. Not the original! Haha that would do no good obviously. Then create a nice little sales post for your site. Don’t forget to include some html links inside your post content just in case their software forgets to remove it.
6) Ping a bunch of popular RSS blog search sites. The top 3 you should go for are:

Google Blog Search
Yahoo News Search
Daypop RSS Search
This will republish your changed up content so the RSS scrapers and all the sites you scraped the titles from will grab and republish your content once again. However, this time with your link. This won’t have any effect on legitimate sites or services so there really are no worries. Fair warning: be sure to make the link you want to inject into all these splogs and scraped sites a variable you can change and update quickly, because this will gain you links VERY quickly. Let’s just say I wasn’t exaggerating in the title. A good idea would be to put the link in the database, and every time the XML publishing script loops through, have it query it from the database. That way you can change it on the fly as it continuously runs.
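The usual mechanism behind these pings is the weblogUpdates XML-RPC call. This sketch only builds the payload rather than sending it; the endpoint URLs below are placeholders, since the services’ real ping URLs have changed over the years:

```python
import xmlrpc.client

# Placeholder endpoints for the three services named in the post.
PING_ENDPOINTS = [
    "http://blogsearch.google.com/ping/RPC2",
    "http://api.my.yahoo.com/RPC2",
    "http://www.daypop.com/RPC2",
]

def build_ping(blog_name, blog_url, feed_url):
    """weblogUpdates.extendedPing lets you hand the service your feed URL directly."""
    return xmlrpc.client.dumps(
        (blog_name, blog_url, blog_url, feed_url),
        methodname="weblogUpdates.extendedPing")

payload = build_ping("My Blog", "http://example.com", "http://example.com/rss.xml")
# To actually send one, something like:
#   xmlrpc.client.ServerProxy(endpoint).weblogUpdates.extendedPing(
#       "My Blog", "http://example.com", "http://example.com", "http://example.com/rss.xml")
```

Looping the send over `PING_ENDPOINTS` on each feed update covers the "ping a bunch of services" step.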
As you’ve probably started to realize, this technique doesn’t just stop at gaining links quickly, it’s also a VERY powerful affiliate marketing tool. I started playing around with this technique before last June and it still works amazingly. The switch to direct affiliate marketing is easy. Instead of putting in your URL, grab related affiliate offers and once you’ve got a big enough list, start matching for related keywords before you republish the XML feed. If a match is made, put in the affiliate link instead of your link, and instead of the bullshit post content put in a quick prewritten sales post for that particular offer. The Black Hat sites will work hard to drive the traffic to the post and rank for the terms, and you’ll be the one to benefit.
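The keyword-matching switch might look like the sketch below; the offer table, affiliate links, and sales blurbs are all hypothetical:

```python
# Hypothetical offer list: keyword -> (affiliate link, canned sales post).
OFFERS = {
    "xbox":     ("http://affiliate.example/xbox?id=123",  "Canned Xbox sales pitch..."),
    "mortgage": ("http://affiliate.example/loans?id=123", "Canned mortgage pitch..."),
}

def pick_link(title, default_url, default_post):
    """Match a scraped title against offer keywords; fall back to the
    plain link-building URL when nothing matches."""
    lowered = title.lower()
    for keyword, (link, blurb) in OFFERS.items():
        if keyword in lowered:
            return link, blurb
    return default_url, default_post

link, post = pick_link("Cheap Xbox Bundles", "http://mysite.example", "generic post")
```

Prioritizing affiliate matches over plain link building, as Eli describes in the comments below, is exactly the fallback order this function encodes.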
Each individual site may not give you much, but when you scale it to several thousands of sites a day it starts really adding up quickly. By quickly I mean watch out. By no means is that a joke. It is quick. There are more RSS scraped pages and sites that go up every day than any of us could possibly monetize, no matter how fast you think your servers are.
Comments (359)
These comments were imported from the original blog. New comments are closed.
hi eli,
how do you do your nr3 script?
Is it enough to change the positions of the words, or should i kill some letter out of the title
like “this title is best” kill the “t” and results in “his ile is bes” ?
Uff, for a noob this is too complicated. Can someone write a script which does that all those steps automatically, along with a visual interface guiding you through it?
I didn’t understand half of what you posted lol
I’d like to someone to write a script for me that does this, then install it on my website for me.
Then Id like that person to come over and wipe my ass for me: two wipes in a clockwise direction.
Gimme a break guys! Start learning how to do this simple stuff yourself and stop depending on the charity of others!
“Then Id like that person to come over and wipe my ass for me: two wipes in a clockwise direction.”
LOL, Lol, lol….
Nice post.
This is a really stupid question but what language does the script need to be in?
Guys,
No one is going to hand over this script to you. This is valuable stuff. If you don’t know how to code it, pay someone to do it for you. But you can’t expect everything to be handed to you on a silver platter.
Haha yeah, I wouldn’t call it very blackhat. After all, you are basically targeting spammers, so who cares?
Eli…when you did this with affiliate links, what were the profits like?
Hey Eli,
Couple of clarifications. What are you feeding the scrapers for content? I know you are giving them the weblogs titles, but where is the actual content they are scraping off us coming from ? Are you saying one sales copy for all posts, just with different titles?
Also, how many of these scrapers actually link back to the original article.. I don’t see that too often.
http://www.ilovejackdaniels.com/regular_expressions_cheat_sheet.png
Look at pattern modifiers. /i and /m are your friend. I like to use /ims at the end of each one.
If i understand correctly, the idea is to create a RSS feed in xml for a non-existing blog, but that will contain instead keyword-targeted text with links to your affiliates links? Then ping everything to get your feed scraped and used on blackhat sites, thus having your content published by blackhat people?
Writing a dynamic RSS feed is a piece of cake, if you need to get started you can look at the tutorial:
http://www.icemelon.com/tutorials.php?id=3&/PHP/Generate%20RSS%20Feeds/
Then we go over a technique called link baiting. Hey guys! The script for this is at my blog!!
Very nice tutorial Eli.. Maybe you should just hint at stuff from now on because I am sure this is going to get quite a rush in the next few days. Actually though, I don’t know how many people will go through with the program.
I think that’s the case with a lot of what Eli posts… very cool ideas that LOTS of people get all hot and bothered about… I suspect that very few readers actually get to the stage of coding and launching many of these ideas though for one simple fact… some effort is required.
That’s great news though, because the few of us that are actually trying and expanding on the ideas Eli is giving us will have less competition
Thanks again Eli for another awesome post!
Tyler, where is the script on your site? I couldn’t find it.
Thanks.
It was never there.. It was a joke.
By the way everyone.. From my calculation there are about 30 blog posts/second.. This is going to destroy my server lol..
I’m going to have to agree with most everyone on here.
If you didn’t entirely understand the post then don’t attempt it. There will be more posts in the future with lots of fun stuff you can try out. Just let this one go until you’re ready for it.
People with the regex question, it depends on what language you’re using, but you will have to do a multiline match as well as accept multiple matches (usually /m and /g) and put those matches into either an array or a scalar.
Don’t use regex to parse html/xml, it’s way too hard and breaks all the time.
Use python. BeautifulSoup is amazingly easy and works. Now with that said, wow, this is such a cute fun project it can be done in a simple shell script using the usual suspects: grep/sed/curl/lynx/etc.
To get your targets, why not try
grep '.xml"' shortChanges.xml | \
grep -iE "(xbox|game|wii|psp)" | \
sed 's; url=".*" ; url="http://my1337.com/rss.xml" ;' | \
sort | \
uniq
As far as the weblogs example goes you can use their changes log. It’s about 2mb for every 5 minutes.
http://rpc.weblogs.com/shortChanges.xml
I shouldn’t have to say this, but Right click, Save Target As (Save Link As in FF).
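The shortChanges.xml file mentioned above can also be handled with the standard library instead of grep/sed. A minimal sketch; the element layout (`<weblog name="..." url="..."/>` entries under a `<weblogUpdates>` root) is assumed from the comments here, so verify it against the real file:

```python
import xml.etree.ElementTree as ET

# Assumed layout of weblogs.com's changes file.
sample = '''<?xml version="1.0"?>
<weblogUpdates count="2" updated="Sat, 01 Jan 2007 00:00:00 GMT">
  <weblog name="My Wheels" url="http://a.example/" when="120"/>
  <weblog name="Real estate exam maryland" url="http://b.example/" when="340"/>
</weblogUpdates>'''

def weblog_names(xml_text):
    """Return the name attribute of every <weblog> entry."""
    root = ET.fromstring(xml_text)
    return [w.get("name") for w in root.findall("weblog")]

names = weblog_names(sample)
print(names)  # ['My Wheels', 'Real estate exam maryland']
```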
Hi everybody. Hi Eli. I’ve just grabbed and analyzed a bunch of titles of recently updated blogs. I got it from http://blogsearch.google.com/changes.xml?last=60 Below is what I got:
BLOG_TITLES_START weston Database for Research Grants and Contracts justin http://denshi.hitchart.com/u.r/denshi/RQ2 My Wheels weston Real estate exam maryland Jason Bartholme’s SEO Blog х_+ф║
Hi Eli,
What format are your feeds in? RSS, RSS2, Atom?
Eli: I found a much easier way of doing this which I won’t post here. E-mail heading your way in a few minutes
Jason
Hi Eli,
I would like to read about promoting your site through social networking sites. How exactly to go about it. You have covered this in various posts as tidbits, but I would like a single big easy to understand one. In a non-technical way of course
John,
I am emailing you from your post at Blue Hat SEO.
You mention that when you are finished building your code that you would be willing to sell a copy.
I’m interested in being put on a List — when you email me if you use “Blue Hat SEO” somewhere in the title — I will be able to find it and respond quickly.
Warmest Regards,
Jeff — 21world@bellsouth.net
Hi John
Im interested in the script too
Good if you would put me on your list and email your paypal details
Kind regards
Mark
Eli:
I just sent you that e-mail
Jason
Great post! I’ve yet to come across another blogger who’s giving away tricks and tips like these
Anyway, I was wondering: at step 3 you’re talking about grabbing and storing all the post Titles from weblogs.com. Maybe I’m missing something here, but doesn’t weblogs.com only list the general -blog- titles?
Thanks for such a valuable resource Eli. This blog just gets better and better.
One question: Maybe I’m missing something here, but could a “poor man’s” way to do this be just to download http://rpc.weblogs.com/shortChanges.xml, find and replace everything between the url="" tags, and ping all the above mentioned blog resources?
Perhaps I’m missing where the actual blog post comes from.
thx.
Well, Weblogs.com is now worthless. Everyone and their brother will be using those titles. Time to figure out how to get the info from the other ping services.
My big issue is I’m trying to figure out how to filter out crazy titles that are not in English.
If you want to get really fancy and even targeted, write a way to categorize the post titles and then target links / title that are relevant.
I think I know what you mean, and I borrowed some code from a “website generator” to help with this process. Let’s just say the posts read a little “differently” now.
Of course, I used less authoritative content…
Alright, here is the moment that you have all been waiting for
I have released my automated version of this to the public on my blog. Here is the link:
#EDIT BY ELI: LINK REMOVED UNTIL I SEE A COPY.
Jason
Hi Eli,
is it a good idea to combine this method with your network idea?
like one script for collection all titles and for each network site a ping script with their own rss feed
Black hole SEO
Following up on yet another silly phrase made up by Mr Bluehat, I’ll tell you how to do black hole SEO in another way. You know what else people scrape a lot? Search engine results. So how would you go about making people scrape SERPS that inclu…
Hey Eli,

Nice post! Some questions, thinking about Google and contents being related.

As always, if I am wrong or not getting it right, let me know please.

1 - Wouldn’t you filter those titles in order to make them related to the content you will use in your XML feed? (Relevancy)

It would be great to parse with the niche you want to target in mind (this would imply we will be using multiple resources to scrape titles and get some matches related to our content). This way, you get links from scraper-made websites targeting specific niches.

2 - “The switch to direct affiliate marketing is easy. Instead of putting in your URL, grab related affiliate offers and once you got a big enough list start matching for related keywords before you republish the XML feed.” (Quality)

We talked a bit about this on your last post where you introduced the idea. It seems we will get tons of incoming links fast, but these are splogs as you said (poor quality). Seems ideal for heavy linkspamming processes and short term affiliate revenue, not that much for websites with long term aspirations.
In your last post, you mentioned that websites, blogs, etc. could scrape contents from white hats defending their position against BH (including links to their websites) or unwillingly insert links to banned domains, thus decreasing their value as a source of incoming links for the BH webmaster. (As a matter of fact, this was the only part of your post I had some doubts about, because since white hatters are getting links from there also… aren’t they harming themselves at the same time? Why not just insert those banned domains and let BH get horrible links that harm their rankings solely?)
As you can see, my doubt is always revolving around the benefits of these fast link building schemes when it comes to SEO projects. Link velocity will be great but what about results in the long run. What will prevail? What do you think based on your experience?
Keep it up!:)
Nick
Heya Nick, great comment/questions as always. I’m glad you post them as comments so everyone can read them.

For your first question: I build links for volume. I build links for relevancy. Just by personal policy I never mix the two, simply because whenever you try to do both, one, the other, or both will naturally suffer. So I always try to do each to its maximum capacity in a separate manner. It hasn’t failed me yet.

However in this case, if you were to do the affiliate offer rather than going for the straight link building, then definitely. A reader already commented on this on the follow up post. He solved the targeted traffic problem by gathering tons of affiliate offers, making a solid list of keywords for each one, then attempting to match each possible title with an affiliate offer. I would stretch that one step further: I would put a priority on the affiliate offers. So if a possible match could be made, I would insert the affiliate link instead. If no match could be made, I would use the title to get an inbound link. Kind of like a Link Laundering technique on steroids.
I think i just accidentally answered your second question. Indeed we did talk about it. With the same intent as before, I would use these links for traffic, or use them for SEO purposes. Mixing them is fine, but do it tactfully and like you said targeted. These are not the highest quality links in the world but many will have solid link authority because the owner may drive some massive amounts of deep links to his pages through link bombing practices. More often than not, these link bombing techniques will involve gathering relevant links. So even though his page may not be relevant to your site, it can still pass good authority, thus boosting your rankings.
So don’t focus on being worried about building links too quickly. It seems logical that search engines would think about this and consider it a bad thing, but it’s simply impossible to make applicable without drastic consequences. Take for instance the presidential race going on right now. Each candidate has a brand new website. Instantly, overnight, they all got absolutely massive amounts of links, most completely irrelevant and from random blogs and sites that have nothing to do with their subject. Can you find a single one that doesn’t rank for its terms? Even the celebrity candidates toppled everyone who’s been long standing already. Gaining links too quickly is never as much of a problem as gaining links too slowly. Although if you are the type that spends 7 hours a day hitting refresh on the results page for your terms, then sanity may be a factor.
I think it’s a damn shame that people here are yet to truly understand and realize the power of my old Synergy Links post. If you completely ignore the entire technique itself and strictly comprehend that it is entirely possible to change the relevancy value of a group of inbound links into high quality and relevant links, the SEO world is your oyster.
Here’s how you can create hyperlinked keywords from the short change weblogs file on the command line.
wget http://rpc.weblogs.com/shortChanges.xml
cat shortChanges.xml | grep "weblog name" | perl -ne '/"(.*?)"/;print "$1\n";'
and the result … Dear God Part Two My blog 710 trip, etc. Demetrius
Dave
What exactly is that cat going to do for you ?
/me spent waaaay too many hours on #sed,#awk and #grep
;)
I use this to filter out those characters:
$val = iconv("UTF-8", "UTF-8//IGNORE", $val);
it does the trick.
Introduction To Black Hole SEO
You’ve got to love a guy that reveals the deep secrets of SEO in step-by-step detail.
…
Hi Eli,
First of all - thanks for an awesome blog. I’ve learned a lot from you! If you ever come to Copenhagen, I’ll be the first to take you out for a beer and some hot Scandinavian chicks :p
Anyway, I got an answer:
Anyone can of course feel free to answer..
Hi,
I’ve been scraping and pinging for two days now.
For some reason I can’t ping Yahoo - I get a 403 when I use the regular ping, and I can’t auto post a form to them due to their beacon cookie shit.. Anyone got a solution?
So I’m left with DayPop and Google Blogsearch.. however, I’m not seeing my links in their index yet.. Am I just impatient or doing something wrong?
Thanks
hey Eli,
Wonder in the SEO World….
I am able to populate the Database every 10 minutes with the Title and URL from http://rpc.weblogs.com/shortChanges.xml
Now my doubt is: do I need to go to each and every URL of the corresponding title to retrieve all the post titles?
Hi Decipher,
I was wondering the same thing. It seems, however, that if a blog post has a title (or if the blog template shows the post’s title as the page’s title, or if the services the owner pings somehow manage to get the title) then these XML feeds will contain the title of each post. If not, then they’ll contain the title of the blog.
I noticed that it’s got quite a few private myspace blogs (definitely not splogs, or they wouldn’t be private) and quite a few posts that only contain the blog title rather than the post’s title.
Well, guess it’s up to each of us to optimize our scripts accordingly….
Good luck.
Eli,
help on a few things. Forget about the technical part.
http://www.newsmob.com/step2.php?id=2761632 http://mortgage-rates.jeremymorgan.com/2007/05/27/uncategorized/katrina-still-bad-for-business-in-pass-christian/
hey Eli,
Again good job with another very intriguing idea. I’ve got a question if you don’t mind clarifying. I understand that you scrape the titles and place them in the db. When you republish the XML feed, what do you use for content?
Hi very cool post. However i wondered about one thing:
“So to keep in the up and up the Black Hatters always be sure to include proper credit to the original source of the post by linking to the original post as indicated in the RSS feed they grabbed. This backlink slows down the amount of complaints they have to deal with and makes their operation legitimate enough to continue stress free.”
I get that this whole concept is predicated on the black hatter/RSSGM linking back to the source. But do they really do this? I was of the opinion that they wouldn’t care if you get credit or not, and you’re unlikely to find out if someone does steal your content, so why would they?
Thanks again
Hey, great post.
One question. What would you recommend putting in the main title, link, and description tags of the channel element of the RSS feed? Should it point to the site being promoted, as well?
Also, instead of submitting to those three linked-to blog search sites, would it work better just to ping pingomatic?
It’s me again with a couple more questions.
Firstly, how often would you recommend pinging those blogsearch sites?
Secondly, with the multiple feed idea, what would be the sense of that? wouldn’t it just result in a lot of stale data as the old feeds are no longer updated and only new ones are generated?
I appreciate any replies to the above.
Hi,
Yeah I realize that. I meant getting the content from weblogs.com and pinging the suggested places like Google’s blog search, etc., in addition to pingomatic? Would that work? They do have an RSS URL option.
I’ve implemented this and it’s been running for a few days now. Every 5 minutes it downloads the latest titles and puts 100 of them into a database, which is read by an RSS file. That script also pings about four different places. I don’t really know how it is working yet.
Hi Eli, Great post and blog thanks.
I have one question about this method.
It relies on just pinging your xml feed will get you ranked in google and yahoos blog search so the scrapers scrape your links.
Surely it takes more than a ping to get ranked?
Thanks
Leon
Great post, I will definitely try it. I just have one question. The rolling RSS file that is produced - does it just contain the titles and links that have been saved or does it also contain post content?
Basically, are you just producing the same file that you parsed with your links injected
Well, it’s 2 months on now. I’m sure most of you who did it are now listed in feeds. Any comments on how well it worked or didn’t? I’m curious.
comments would be welcome. I’ve run it for a few days now, and google,etc is picking up the posts, but I wont know if any splogs are picking them up for quite a while.
Well I’ve got my scripts set up and running… just a few quick, general questions:
The scraper sites re-scrape blog posts if posted with the same title?
How long does it take on average for the aggregators (after pinging) to index your “posts”?
The content for each of my posts is the same, is that a problem?
Eli–how in the world did you think of such an intricate strategy? Love the site.
-Drew
this is a great idea… i love it. i want to join to Andrew questions.
I also want to show you a little something that I built. It uses this system and you can use it with your own site.. at the moment it’s not sending the pings, only giving you the xml file (it looks static, but it’s dynamic and takes random titles every time), so you have to send the pings yourself.. I’m working on that right now. The only trick is… well, I take 5% of the links, if that’s okay with you. Feel free to use it
http://exe.netzach.biz/sys/
btw, sorry about my english, i know it sucks.
good day, Nadav
UPDATE: It sends it to Ping-O-Matic, Google Blog Search and Daypop RSS Search, and I’m working on Yahoo.
Please give it a try and tell what you think
Nadav
Nadav
It seems like you generate posts based on the same 5 articles over and over. Try a few sample links and you’ll see what I mean when you open the xml file.
This site is great, been lurking for a while and decided to give this one a go.
Regarding the “rolling XML feed”. So I have written some code to automate this, but due to the volume I am creating a new feed file for every 100 entries. My initial reaction was to create the new file, ping the services, and delete the file (to save disk space). Does this feed file need to exist for a period of time after the ping for the services to process it, or does it get processed when the ping is submitted?
Thanks for all the great info - keep it coming!
Eli, Holy crap man, this post at first blew my mind! But being a perl freakazoid, I decided to tackle this one, to see what I could do with it. So yesterday, I whipped up some code, tested it out, and finally implemented it last night. Like I said, holy crap! The results were pretty immediate, in a place I would never have expected.
I am looking over my stats for today, and the first thing that’s popping out at me so far is the search terms that resulted in hits! My site is all music related, so I was struggling to develop that portion, and up to now, it was only music terms. But today, I am getting hits from search terms like “Ron Paul”, “My Braces” and “download partition magic”. Crazy! The only thing that changed was to implement your idea!
On a side note, when I create the feed, I do a rough filter so at least get into the ball park of relevancy.
Thanks a million times over man!
I just started the whole process and some important questions have come up. I’m reviewing my server logs after I pinged blogsearch.google.com and seeing an interesting picture. The bot came a few times, grabbed my rss feed, and suddenly stopped coming for more no matter how many times I pinged google.
I pinged a dozen of times over the hour only.
Second, do I have to change the description of my sales copy for every item in my xml feed?
Is there any danger of duplicate content in the feed when the same description is repeated over and over again? Or do I have to use some kind of Markov chains for every description?
Does my site have to be a blog to provide rss feeds to aggregators or it can be an ordinary site?
email me host:hush.com user:mr_man
if you want a free version of the script that does this.
I’m with Ozzmo, someone holler at me. I want to pay money to get someone to start implementing these tactics and get a better grasp of this type of approach. Let’s get ‘er done.
yahoo user:cjayhey
I just wrote an implementation of this a few days ago… not sure it’s exactly the way Eli prescribes but it works damn well… hundreds of pages found in Google for a distinct blurb I inserted into all my rss entries.
The thing that really gets me (other than how fast it works) is that the splogs really do link to me, and of all the ones I’ve looked at, only about 5% use nofollow… crazy.
Thanks Eli!
Hi Eli,
Im now energised to learn php and dbase after reading your site. Now the typical thing WH’s and BH’s will ask…
Do you use this method to build backlinks for proper whitehat sites or your throw away domains?
I love your site! Thanks for the great information. I am seriously so new to this, so I have one general question. If I want to learn programming and eventually get to a point where I can read this post and implement what is being said, what is the best place to start? Any particular books, sites, etc?
And I will pay someone for the script mentioned. Please email me at zenterprises19 at yahoo.com with details. I can use paypal or another method if you prefer.
thanks!
This is the most “i finally found some cool trick”-feeling site I came across in a while.
I certainly do want such a script too and am willing to pay. Email me at host:gmx.net user:natadd
I’m willing to pay for this script also.
Email me at sunsolutions02 [at] yahoo.com
Let me see if I’ve got this straight then:
Read http://rpc.weblogs.com/shortChanges.xml
Extract all the URLs and read each one, looking for any RSS feeds on the page.
Read each of those RSS feeds and scrape the <title>’s.
Build your own feed with those titles and ping it to google blog search etc.
Anything else I need to know?
Step 3 had the html stripped. It should say: scrape all the post titles within the <td class="blogname"> cells located between the <!-- START - WEBLOGS PING ROLLER --> comments.
Thanks for this post!
It was easy to read and comprehend, and I hope to implement it very soon and enjoy the results!
Thanks Eli!
Hi.
I do not understand why I always get an empty response on the google ping changes xml at http://blogsearch.google.com/changes.xml
If I do a “HEAD” to get the headers on the same url I get “502 Bad Gateway”. Do I need to issue a POST request somehow or what ?
I don’t get it. Because I’m outside of U.S. or what?
Kindly
//Marcus
Guys, isn’t Google smart enough to know that gaining 100-1000 links per day or hour is seen as spam and will derank you faster than a gerbil crawling through Richard Gere’s asscrack?
Sorry for bumping this, but your site will go through the black hole if you get links too fast.
I remember reading a bit about Google’s search patent in which they explain the factors that determine ranking, and speed of gaining backlinks is one of them.
This doesn’t mean gaining backlinks quickly will have a negative effect - the moment it does, as Eli pointed out in another post, the rules of the game will change because you could poison your competitors by rapidly gaining crappy backlinks to their sites.
Clearly rapid backlink growth will not give you the same advantage as slow and gradual growth, but you’d still be better ranked than sites that have only a few backlinks.
Hi, like obviously many others, I still don’t understand where the content is coming from? Do I have to scrape it too and save it on my site (stealing content), or do I just link to it and put the links to mysitetopromotedotcom into the title? My email: hammer(at)web(dot)de. I would be thankful for any explanation of this issue.
also i’m interested in buying a working script in classic asp or php… thanks for this great blog. i love it… peter
I am going to run my feed from a MySQL db and pull out random results on every request. I have started working on the script and will put it on sale once I'm finished. So the script is going to give me backlinks and a few bucks.
by the way, out of box idea Eli, Hats off !!!!
Excellent!
I just found this script and will give this a shot.
Going through the script to understand what it does.
A++++
G.
This is quite a bit out of my league and I don't think I'd ever try something like this, but the post read well and was entertaining.
Do you recommend doing something like this with brand new sites?
I gave this a shot so I thought I'd share my results. Not the same as those of others who have tried it, I warn…

1) Getting the blog titles: You don't need to go to the blogs to fetch the post titles. This is very time consuming; I tried it and you won't get very many titles this way. You can get the titles of the blogs directly off of the home page at http://www.weblogs.com/. There are 10 (if I recall correctly) and they are all inside
RSS files: I maintained 24 RSS files. I updated one per hour, so each file remained unchanged for a whole day. Once per hour I would go through step 2 until I found 100 titles. I filtered out obscene stuff; you may wish to do the same. I would set the title field to the one I scraped from weblogs.com and the description field to the ClickBank product description. The link would be my cloaked CB hot link.
Results: I ran this for 6 days then stopped. I omitted 16 January as it was a partial day. These are the dates and unique hosts for each day period:

Date: Unique hosts
17: 95
18: 49
19: 25
20: 20
21: 31
As you can see this is not server-busting traffic. The feed crawlers account for about 12 to 15 of the hosts, which means that I got anywhere from 5 to 80 visits a day. As these results do not square with the experiences of others, I would appreciate it if any errors could be spotted.
Software used: Linux, perl, mySQL.
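Peter's rotation and filtering scheme (24 static files, one overwritten per hour, up to 100 clean titles each) is simple to reproduce. Here is a rough Python sketch of just those two pieces; he used Perl and MySQL, and `BLOCKLIST` is a hypothetical stand-in for whatever obscenity filter you'd actually use:

```python
import datetime

# Hypothetical filter terms; substitute a real obscenity list.
BLOCKLIST = {"obscene", "words", "here"}

def hourly_feed_name(now=None):
    """One file per hour of the day: overwriting feed-HH.xml once an hour
    leaves each file unchanged for a full 24 hours, as Peter describes."""
    now = now or datetime.datetime.now(datetime.timezone.utc)
    return "feed-%02d.xml" % now.hour

def pick_titles(titles, limit=100):
    """Keep only titles with no blocklisted word, up to the hourly quota."""
    clean = [t for t in titles if not (set(t.lower().split()) & BLOCKLIST)]
    return clean[:limit]
```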
Thanks,
Peter
Of course Google is cracking down on this stuff; they always crack down on black hat/hole. That's why you pump as much traffic/links as possible before they shut down your site/loophole.
Quantity ftw when it comes to black techniques.
This method will only work for 37 days, 4 hours, 16 minutes and 33 seconds from the time you posted this.
Better hurry.
Eli,
I am a newcomer to your Bluehatseo blog and I'm blown away! My knowledge of SEO is sketchy to begin with, and what I know of "blackhat" doesn't really go any further than XSS, blog farms, and cloaking. I really appreciate you, and everyone who comments, for taking the conversation as far away from the mainstream as you have.
By the way, in one of the other posts or comments, you mentioned a coder you trust. I'll find that post again eventually, but any chance you or one of the other readers could remind me of the name? You identified the person as tobcn or something like that. Thanks!
I take my (black) hat off to the guys out there who can make sense of this and make it work for them.
No doubt most of you are less than half my age, to add insult to injury.
Can you send me the code of the script to use ?
Thanks in advance.
Mike
This can be done without a db; just store the titles in an array. Go through the scrape a few times until you end up with 100 or so titles (after removing duplicates and foreign ones), then create a static XML file named based on the time or date, then ping your favourite search engine with the file path.
I have my script wait 10 seconds in between scrapes and it does this 3 times, grabbing about 30 titles each time it runs. So I end up with a static RSS feed, 30 posts long. Not bad for about 50 lines of PHP and no DB!
cheers Eli.
P.S. Here's some regex for the weblogs titles: '/" class="pingLink">\s(.*)/m'. Just remove the rubbish around the title with a couple of str_replace's, or email me for the weblogs.com scrape-and-ping script: inf@disfo.org
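The no-database variant described above (scrape a few times, dedupe in an array, drop foreign titles, write one static file) could look like this in Python. The regex is adapted from the commenter's PHP pattern; treat the rest as an unverified sketch:

```python
import re

# Adapted from the commenter's pattern for weblogs.com ping links.
PING_LINK_RE = re.compile(r'" class="pingLink">\s*(.*)')

def is_english_ish(title):
    """The 'removing foreign' step: drop titles containing bytes above ASCII 127."""
    return all(ord(c) <= 127 for c in title)

def collect_titles(pages, want=100):
    """Scrape several page snapshots, deduping in a plain list -- no database needed."""
    seen = []
    for html in pages:
        for t in PING_LINK_RE.findall(html):
            t = t.strip()
            if t and t not in seen and is_english_ish(t):
                seen.append(t)
            if len(seen) >= want:
                return seen
    return seen
```

From here the deduped list would be dumped into a static RSS file and the file path pinged, as the comment describes.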
This method no longer works. I’ve been able to ping Google, Technorati, Ping-O-Matic and IceRocket. I’ve had 0 luck getting more links, in fact I even lost a link. The bad thing about coming up with great ideas, such as this one, is that once everybody starts doing it, it winds up getting “patched”. I find it very hard to drive traffic to my sites, and I’ve been trying many methods with no luck. Legit SEO is VERY time consuming, while using a more “automated” approach seems ideal. I realize that automating SEO is frowned upon because everybody would be doing it. However, there’s GOT to be a way to bring traffic to a site, and make a few bucks off of google ads. Even sites with “real” content, not generated content, are not coming up in the first couple pages of the search engines. So, if somebody can be a visionary for a SEO strategy that will work, PLEASE contact me. I’m an idea man, but this time I’m out of ideas.
Matt sinack @ mail DOT ru
I've only recently found this blog and find it hard to believe just how advanced people like Eli are.
Awesome!
Hi, I'm trying to figure out what this tool is for, but can't see any info that tells me all I want to know… has anyone used this? Please tell me how it works.
Regards
This is amazing, it really blew my mind! I hope it still works, as I'm gonna have a go at implementing this soon.
Great site Eli!
Very interesting idea! http://www.love2u.ro
regards!
Anyone actually try this?
I have done as described and it quickly, I mean QUICKLY, ate up all my bandwidth. I had the RSS XML feeds on a server that was being trashed by all the bots coming to check out my updates. I cannot comment on increasing links, but I definitely saw a spike in traffic to the pages (on a different server) I linked to in the XML, which therefore increased my AdSense earnings. So I would like to keep this going.
Any feedback regarding how to create and ping the XML without making a house payment to pay for the bandwidth?
Just get on a hosting plan with better if not unlimited bandwidth.
I have only just started this strategy, so I still have to find out the effect. I have pinged around 500 new XML feeds already. I reread the article and found I forgot to add HTML links to the descriptions. Will do that on my next batch!
Great blog with great ideas. I wish I was a programmer. But alas.
The dumb thing is that once these techniques are published they don't work anymore after a given time, due to all the followers trying them out.
So you’re beating the sploggers and scrapers at their own game, eh?
Clever.
Hey Eli,
Love the technique, followed it well enough to do it on my own. I got as far as creating a PHP script that gets the titles of the posts on weblogs.com, made the RSS feed to go with it, and got the RSS pinger ready. The only thing is my XML feed rarely validates because so many of the titles contain 'illegal characters.' I've tried using functions like PHP's htmlentities, but no matter what I do I get this message when I try to validate the RSS:
XML parsing error: :12:75: not well-formed (invalid token)
Thus my feed will rarely get spread around because it doesn’t validate, right?
Any suggestions?
RSS Feeds giving me trouble too -
XML parsing error: :12:75: not well-formed (invalid token)
Free dinner for whoever fixes this!
Here is the php code I use, not 100% but better than nothing:
$urls = $doc->getElementsByTagName( "weblog" );
foreach ( $urls as $url ) {
    $EntryName = $url->getAttribute( 'name' );
    $validtext = TRUE;

    // Test for ASCII characters higher than 127, character by character
    for ( $i = 0; $i < strlen( $EntryName ); $i++ ) {
        if ( ord( $EntryName[$i] ) > 127 ) {
            $validtext = FALSE;
            break;
        }
    }

    if ( $validtext ) {
        $modEntryName  = str_replace( "'", "", $EntryName );
        $mod2EntryName = str_replace( "»", "", $modEntryName );
        writeLink( $mod2EntryName, $link );
    }
}
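For the "not well-formed (invalid token)" errors a few comments up, the usual culprits are raw control characters and unescaped & or < in scraped titles; htmlentities alone doesn't remove the control bytes XML 1.0 forbids. A small Python sketch of a per-title sanitizer (the same idea would apply in the PHP above):

```python
import re
from xml.sax.saxutils import escape

# Control characters that XML 1.0 forbids outright (tab, LF, CR are allowed).
CONTROL_CHARS = re.compile(r"[\x00-\x08\x0b\x0c\x0e-\x1f]")

def xml_safe(title):
    """Make a scraped title safe to drop inside <title>...</title>:
    strip forbidden control characters, then escape &, < and >."""
    return escape(CONTROL_CHARS.sub("", title))
```

Run every scraped title through something like this before writing the feed, and the validator's "invalid token" complaints should go away.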
Because some will do and most of them won't. So if you are doing this on a big scale, it will still work out for you. Plus you can include a big

This Content Belongs To MYsite.com

at the end of each post, so at least if you don't get the link back you'll still get publicity and probably good hits.
I found this script at esrun’s blog but I am too thick to get it going right. Can’t get the pinger to work and it is a bitch to install. Definitely one for the pros.
I wish someone would take some pity on us less advanced tech guys and really dumb down the whole process of installing and executing the script.
Thanks! This is perfect to start with. Anyone know how long this will work?
Nice article, but I didn’t understand something.
After you create your own RSS feed that links to your affiliate pages, for example, why would the RSS scrapers keep the link to your site?
I thought that these people are only interested in the CONTENT of your RSS, so once they copy your content, why should they link to the scraped content's source?
Thanks Idan
Google like RSS, Backlinks, Page Rank Button **** etc.
@seoeoeo
I think that you would be able to tell once you start getting the backlinks and/or traffic from the sites scraping your feed.
@eli
Once you have this setup do you ping the appropriate servers from your server? In your ‘Blog Ping Hack’ post you said that you avoid pinging from your server at all costs, so would you handle this job the same way as the Blog Ping Hack? Would you recommend combining the two techniques?
Hi Eli, I finally got what is actually written in the post.
I made a desktop application following the steps, will be testing it out this weekend, and will let everyone know the results. Thanks for the amazing post again.
If anyone wants help with what you should be doing and how, I will be glad to help. Contact me: megachamp(at)megachamp(dot)com.
Thanks
Nice post! Great article, I learned many things from it.
Thanks
These feeds are actually your content representatives, which are easy to access and manage, and can be published on other blogs, websites, and content feeds with the ease of a small RSS feed setup…
I’ve been meaning to mention this site for a while now, but never did have the chance to get to it…
Black Hole SEO sounds quite interesting.
Btw great post Eli