
Wednesday, 16 November 2011

What is a Botnet?

A botnet (also known as a zombie army) is a number of Internet computers that, although their owners are unaware of it, have been set up to forward transmissions (including spam or viruses) to other computers on the Internet. Any such computer is referred to as a zombie - in effect, a computer "robot" or "bot" that serves the wishes of some master spam or virus originator. Most computers compromised in this way are home-based. According to a report from Russia-based Kaspersky Lab, botnets -- not spam, viruses, or worms -- currently pose the biggest threat to the Internet. A report from Symantec came to a similar conclusion.

Computers that are co-opted to serve in a zombie army are often those whose owners fail to provide effective firewalls and other safeguards. An increasing number of home users have high speed connections for computers that may be inadequately protected. A zombie or bot is often created through an Internet port that has been left open and through which a small Trojan horse program can be left for future activation. At a certain time, the zombie army "controller" can unleash the effects of the army by sending a single command, possibly from an Internet Relay Chat (IRC) site.

The computers that form a botnet can be programmed to redirect transmissions to a specific computer, such as a Web site that can be closed down by having to handle too much traffic - a distributed denial-of-service (DDoS) attack - or, in the case of spam distribution, to many computers. The motivation for a zombie master who creates a DDoS attack may be to cripple a competitor. The motivation for a zombie master sending spam is the money to be made. Both rely on unprotected computers that can be turned into zombies.

According to the Symantec Internet Security Threat Report, through the first six months of 2006, there were 4,696,903 active botnet computers.

Credits to:
http://www.readwriteweb.com/archives/is_your_pc_part_of_a_botnet.php
http://searchsecurity.techtarget.com/definition/botnet

Friday, 11 November 2011

Seventh Poll Result Analysis

The picture below shows last week's poll analysis. We asked which bot you know the most about. Most of the voters voted "Googlebot".

In conclusion, 6 people voted in the poll, and 3 of them voted "Googlebot". This means that most people know Googlebot best. 2 people voted "chat bot"; fewer voters chose it because Googlebot is more popular than chat bots. Only 1 person knows what a shopbot is, and nobody voted for "knowbot"; to learn more about knowbots, you can find information on our blog. If you have any questions, you can send us an email or leave a comment here. Besides, we will always share more information about Web bots with you!

Thursday, 10 November 2011

Computer game bot

A bot, most prominently in first-person shooter (FPS) games, is a type of weak-AI expert system software that controls a player in deathmatch, team deathmatch and/or cooperative modes in place of a human player. Computer bots may play against other bots and/or human players in unison, either over the Internet, on a LAN or in a local session. The features and intelligence of bots vary greatly, especially with community-created content. Advanced bots feature machine learning for dynamic learning of the opponent's patterns as well as of previously unknown maps, whereas more trivial bots may rely completely on lists of waypoints created for each map by the developer, limiting the bot to playing only maps with said waypoints. Incidentally, using bots is against the rules of all of the current major massively multiplayer online role-playing games (MMORPGs).
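As a toy illustration of the waypoint technique (a sketch in Python, not code from any real game), a trivial bot just walks toward each stored waypoint in turn:

    import math

    # Toy waypoint-following bot: move straight toward each waypoint in turn,
    # the simple technique attributed above to trivial FPS bots.
    waypoints = [(0.0, 0.0), (5.0, 0.0), (5.0, 5.0)]  # hypothetical map waypoints
    pos = (2.0, -1.0)
    step = 1.0  # distance moved per tick

    for target in waypoints:
        while math.dist(pos, target) > step:
            dx, dy = target[0] - pos[0], target[1] - pos[1]
            d = math.hypot(dx, dy)
            pos = (pos[0] + step * dx / d, pos[1] + step * dy / d)
        pos = target
        print("reached waypoint", target)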

In Multi-User Domain games (MUDs), players may utilize bots to perform laborious tasks for them, sometimes even the bulk of the gameplay. While a prohibited practice in most MUDs, there is an incentive for the player to save his/her time while the bot accumulates resources, such as experience, for the player character.

Aim Bot
An aimbot (sometimes called "auto-aim") is a type of computer game bot used in first-person shooter games to provide varying levels of target acquisition assistance to the player. It is sometimes incorporated as a feature of a game (where it is usually called "auto-aim" or "aiming assist"). However, using a more powerful aimbot in multiplayer games is considered cheating, as it gives the user an advantage over unaided players.

Aimbots have varying levels of effectiveness. Some aimbots can do all of the aiming and shooting, requiring the user only to move into a position where the opponents are visible; this level of automation usually makes it difficult to hide an aimbot - for example, the player might make inhumanly fast turns that always end with his or her crosshairs targeting an opponent's head. Numerous anti-cheat mechanisms have been employed by companies such as Valve to detect and prevent their use.

Some games have "auto-aim" as an option in the game. This is not the same as an aimbot; it simply helps the user to aim when playing offline against computer opponents usually by slowing the movement of 'looking/aiming' while the crosshair is on or near a target. It is common for console FPS games to have this feature to compensate for the lack of precision in analog-stick control pads.

Credits to:
http://gamebots.sourceforge.net/
http://mmohuts.com/review/bots

IRC BOT



What is an IRC bot?

An automated client:

To an IRC server, an IRC bot is virtually indistinguishable from a regular IRC client (i.e., a person using a program such as X-Chat, mIRC, or ircii). However, there's no person typing behind an IRC bot. It only makes automated responses, based on (usually) what is happening on IRC. An IRC bot can do things based on public messages, private messages, pings, or any other IRC event. But a bot isn't limited to the world of IRC. It can talk to a database, the web, a filesystem, or anything else you may imagine.

Examples:

Here are some common IRC bots that you may have seen in your travels already:

File serving: This type of bot emulates an FTP program by interfacing with a filesystem. Users talk to the bot using private messages with commands like "ls" and "get". The user can send and receive files using DCC (a part of IRC that allows the initiation of peer-to-peer file transfers).
Channel administration: This bot maintains a list of channel ops (people who run the channel) and makes sure they stay in control of it, even if individual people are disconnected. They may also kick people from the channel who violate its etiquette (e.g., talking in all caps, using colors, flooding, etc.)
Games: Some bots will allow the people in the channel to play text-based games. We'll learn later how to program a trivia bot.

What you need to program an IRC bot

1. Perl
2. Net::IRC Module
3. IRC server

HelloBot
HelloBot is a greeting bot. When a user, let's give him the nick "Joe," joins a channel, HelloBot will say "Hello, Joe!" When Joe leaves, HelloBot will say "Goodbye, Joe!" (Of course, since Joe has already left, he won't see the message, but other users in the channel will.)
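The post lists Perl and the Net::IRC module as the tools; as a language-neutral sketch of the same HelloBot idea, here is a minimal Python version speaking the raw IRC protocol (the server, nick and channel are placeholders, and a real bot would wait for the server's welcome reply before joining):

    import socket

    SERVER, PORT = "irc.example.net", 6667   # placeholder server
    NICK, CHANNEL = "HelloBot", "#test"      # placeholder nick and channel

    sock = socket.create_connection((SERVER, PORT))
    sock.sendall(f"NICK {NICK}\r\nUSER {NICK} 0 * :{NICK}\r\nJOIN {CHANNEL}\r\n".encode())

    buf = ""
    while True:
        data = sock.recv(4096)
        if not data:
            break                                  # server closed the connection
        buf += data.decode(errors="replace")
        while "\r\n" in buf:
            line, buf = buf.split("\r\n", 1)
            if line.startswith("PING"):            # answer keepalives or get dropped
                sock.sendall(("PONG" + line[4:] + "\r\n").encode())
                continue
            # ":nick!user@host JOIN #chan" -> the nick who triggered the event
            nick = line[1:].split("!", 1)[0] if line.startswith(":") and "!" in line else None
            if nick and nick != NICK and " JOIN " in line:
                sock.sendall(f"PRIVMSG {CHANNEL} :Hello, {nick}!\r\n".encode())
            elif nick and " PART " in line:
                sock.sendall(f"PRIVMSG {CHANNEL} :Goodbye, {nick}!\r\n".encode())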

Reference:
Brian Seitz, What is an IRC bot?, Retrieved 11/10/2011
URL:


Tuesday, 8 November 2011

Knowbots

A knowbot is a kind of bot that collects information by automatically gathering certain specified information from websites. A knowbot is more frequently called an intelligent agent or simply an agent. A knowbot should not be confused with a search engine crawler or spider. A crawler or spider program visits Web sites and gathers information according to some generalized criteria, and this information is then indexed so that it can be used for searching by many individual users. A knowbot works with specific and easily changed criteria that conform to or anticipate the needs of the user or users. Its results are then organized for presentation but not necessarily for searching. An example would be a knowbot (sometimes also called a newsbot) that visits major news-oriented Web sites each morning and provides a digest of stories (or links to them) for a personalized news page.

Knowbots Information Service
The Knowbot Information Service (KIS), also known as netaddress, provides a uniform user interface to a variety of remote directory services such as whois, finger, X.500, and MCI Mail. By submitting a single query to KIS, a user can search a set of remote white pages services and see the results of the search in a uniform format. There are several interfaces to the KIS service, including e-mail and telnet. Another KIS interface imitates the Berkeley whois command.

KIS consists of two distinct types of modules which interact with each other (typically across a network) to provide the service. One module is a user agent module that runs on the KIS mail host machine. The second module is a remote server module (possibly on a different machine) that interrogates various database services across the network and provides the results to the user agent module in a uniform fashion. Interactions between the two modules can be via messages between Knowbots or by actual movement of Knowbots.
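The directory services KIS interrogated (whois, finger, X.500) are classic line-oriented protocols. As a hedged sketch, here is a raw WHOIS lookup (RFC 3912) in Python, the kind of single remote query a KIS-style user agent would fan out to several services:

    import socket

    def whois(query, server="whois.iana.org", port=43):
        # RFC 3912 WHOIS: send one query line, then read until the server closes.
        with socket.create_connection((server, port), timeout=10) as s:
            s.sendall((query + "\r\n").encode())
            chunks = []
            while data := s.recv(4096):
                chunks.append(data)
        return b"".join(chunks).decode(errors="replace")

    print(whois("example.com"))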

Reference
Denis Howe, Knowbots, Retrieved 19 June 1999



Friday, 4 November 2011

New ZeuS bot could be antivirus-proof

A modified version of the ZeuS bot may have some appeal to cybercriminals due to its potential to thwart anti-virus software, a computer security firm disclosed.

Trend Micro said the variant, detected as TSPY_ZBOT.IMQU, uses a new encryption-decryption algorithm and makes it harder for anti-virus programs to clean its infection.

“If a machine is infected with ZeuS, calling (API GetFileAttributesExW) via a specific parameter would return with the bot information, which includes bot name, bot version, and a pointer to a function that will uninstall the bot. Antivirus software may utilize this function to identify ZeuS bot information and to clean ZeuS infection automatically. However, the new version of ZeuS also updated this functionality and removed the pointer to the bot uninstall function, thus, eliminating the opportunities for AVs to utilize this function," it said in a blog post.

Also, it said this new version showed current trackers may fail to decrypt its configuration file due to its updated encryption/decryption routine.

The new variant does not use the RC4 encryption algorithm, but an updated encryption/decryption algorithm instead, Trend Micro added.
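For context, RC4, the stream cipher that earlier ZeuS builds used before this change, is a very small algorithm; a textbook sketch in Python:

    def rc4(key: bytes, data: bytes) -> bytes:
        # Key-scheduling algorithm (KSA)
        S = list(range(256))
        j = 0
        for i in range(256):
            j = (j + S[i] + key[i % len(key)]) % 256
            S[i], S[j] = S[j], S[i]
        # Keystream generation (PRGA), XORed with the data
        out, i, j = bytearray(), 0, 0
        for byte in data:
            i = (i + 1) % 256
            j = (j + S[i]) % 256
            S[i], S[j] = S[j], S[i]
            out.append(byte ^ S[(S[i] + S[j]) % 256])
        return bytes(out)

    # RC4 is symmetric: encrypting twice with the same key returns the input
    assert rc4(b"key", rc4(b"key", b"config")) == b"config"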

"We believe this is a private version of a modified ZeuS, created by a private professional gang comparable to LICAT. Though we have yet to see someone sell this new version of the toolkit on underground forums, we expect that we will see more similar variants emerge in the not-so-distant future," it said.

Trend Micro said the new malware targets a wide selection of financial firms including those in the United States, Spain, Brazil, Germany, Belgium, France, Italy, and Ireland.

More interestingly, it targets HSBC Hong Kong, which suggests that this new ZeuS variant may be used in a global campaign that may already include Asian countries, it said.

It added that the emergence of these latest ZeuS variants implies ZeuS is still a very profitable piece of malware, and that cybercriminals are continuously investing in the leaked source code.

Credits to:
http://www.gmanews.tv/story/231604/technology/new-zeus-bot-could-be-antivirus-proof

Thursday, 3 November 2011

Sixth Poll Result Analysis


The picture below shows last week's poll analysis. The title of the poll was "Does the Web bot predict by tracing keywords related to the word you want?". Most of the voters voted "Sometimes only".




In conclusion, 7 people voted in the poll. 4 voted "Sometimes only", which means that most of the time, Web bot keyword prediction does not help most people. 2 people voted "Yes, always", so Web bot keyword prediction really is helpful for some people, and 1 person voted "No". If you have any questions, you can send us an email or leave a comment here. Besides, we will always share more information about Web bots with you!

Tuesday, 1 November 2011

Yahoo Bot



Search engines like Yahoo! use bots to crawl through websites and gather information, returning a list of relevant websites to the user who types search terms into them. Bots return up-to-date information on currently active websites, but have been known to return defunct websites.

Yahoo! Slurp
Yahoo! Slurp is the recognized name of the bot that the Yahoo! search engine uses. It's a different bot than Googlebot for Google and Bingbot for Bing. Yahoo! Slurp is based on Web crawler architecture developed for Inktomi. The Yahoo! Slurp bot crawls through websites and creates a virtual copy of them to be used later for the search engine.

Bot Tasks
The Yahoo! Slurp bot performs three basic actions for the Yahoo! search engine. First, it retrieves Web server website information. Then the bot analyzes the website content for relevant information. Finally, it files the information to be used in site indexing. The bot itself is little more than an automated script performing these three tasks over and over. However, it can do it at incredibly fast speeds. A Yahoo! search engine results page takes mere seconds to load. It wouldn't be possible without the bot.

Site Indexing
The process of ordering the websites that the Yahoo! Slurp bot has found is called site indexing. Websites are ranked according to their relevancy to search terms and their individual ranks within the Yahoo! search engine. The more popular and integral to the Internet the site is, the more rank it obtains and the higher it is when Yahoo! returns its search results. The Yahoo! Slurp bot helps index the sites for Yahoo!

How the Bot Searches
The Yahoo! Slurp bot uses four parameters when accomplishing its website information retrieval process. It operates using a selection policy that determines which Web pages to download; it uses a re-visit policy that determines when the pages are checked for updated information. The bot also operates by a politeness policy that prevents certain websites from being overloaded with searching. It also operates by a parallelization policy, which coordinates the Yahoo! Slurp bot with other Web crawlers.
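The politeness policy is the crawler-side counterpart of a site's robots.txt file. As a small hedged illustration (placeholder site; "Slurp" is the user agent token Yahoo!'s crawler matches in robots.txt), Python's standard library can check whether a page may be fetched:

    from urllib.robotparser import RobotFileParser

    rp = RobotFileParser("https://www.example.com/robots.txt")  # placeholder site
    rp.read()
    # Would Yahoo!'s crawler (user agent token "Slurp") be allowed this URL?
    print(rp.can_fetch("Slurp", "https://www.example.com/private/page.html"))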

Reference
Meg North, What Is a Bot on Yahoo?, Retrieved 6 September 2011
URL:

Monday, 31 October 2011

Chat Bot

A chat robot is a computer program that simulates human conversation, or chat, through artificial intelligence. A "chatterbot" or "chat bot" is a bot whose purpose is simply to communicate with a client. From humble beginnings such as the early program "ELIZA," chat bots have grown in purpose and scope. Typically, a chat bot will communicate with a real person, but applications are being developed in which two chat bots can communicate with each other. Chat bots are used in applications such as e-commerce customer service, call centers and Internet gaming. Chat bots used for these purposes are typically limited to conversations regarding a specialized purpose and not the entire range of human communication.
Windows Live Messenger, also known as MSN, is an instant messaging program which allows you to add and communicate with other users. A chat bot is a computer program that can be used with Windows Live Messenger; it registers the messages you send and simulates a human response, allowing you to carry out a realistic, interesting conversation.
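Chat bots in the ELIZA tradition work by simple pattern matching rather than real understanding. A minimal Python sketch with a few made-up rules (all patterns and replies are invented for illustration):

    import re, random

    # A few hypothetical ELIZA-style rules: regex pattern -> canned replies
    RULES = [
        (r"\bI need (.+)", ["Why do you need {0}?", "Would {0} really help you?"]),
        (r"\bI am (.+)", ["How long have you been {0}?", "Why do you say you are {0}?"]),
        (r"\bbecause\b", ["Is that the real reason?"]),
    ]

    def reply(message):
        for pattern, responses in RULES:
            m = re.search(pattern, message, re.IGNORECASE)
            if m:
                return random.choice(responses).format(*m.groups())
        return "Please tell me more."

    print(reply("I need a holiday"))  # e.g. "Why do you need a holiday?"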

References
Amber Viescas, eHow Contributor, What Are Web Bots?, Retrieved July 25, 2011
URL:

Nade Xro, eHow Contributor, How to Add Chat Bots on MSN, Retrieved May 25, 2011
URL:

Friday, 28 October 2011

Web Bot Contest Answer is Released!!



Across
1. GOOGLE--A search engine spider
3. WEBBOT--Retrieves information from websites
4. INTERNETBOT--A software application that does repetitive and automated tasks on the Internet
5. SPAMBOT--Sends spam emails and typically works automatically
6. HOTBOT--Uses the Inktomi database

Down
1. SHOPBOT--A special type of bot used to check prices
2. SPIDER--Another name for a web crawler
3. GOOGLEBOT--Crawling, indexing and serving are its key processes in delivering search results


Our blog contest has now ended. There were 4 participants in our contest. Thanks to all the participants who joined, and congratulations to the 3 participants who won! The following are the winners and their prizes:

1. Lee Horng wins the first prize: 4GB pendrive x1!

2. Boo Kuok Chai wins the second prize: External USB hub x1!

3. Gan Shi Jet wins the third prize: Optical Mouse x1!

Continue to follow the latest Web bot information on our blog, Twitter and Facebook page. Thanks!

Thursday, 27 October 2011

Fifth Poll Result Analysis


The title of last week's poll was “Do you guys know what Malware Bots are?” We have created a pie chart of the vote results.



The figure above shows that there were only 2 votes in total. 1 person voted “Yes” and the other voted “I don’t know what that is.”

In conclusion, according to the chart above, only 1 person voted that he knows about Malware Bots, which means that at least some people know what Malware Bots are. Lastly, we will continue to improve our blog and share more interesting information with our readers.

Web Scraping & techniques

Web scraping (also called Web harvesting or Web data extraction) is a computer software technique of extracting information from websites. Usually, such software programs simulate human exploration of the World Wide Web by either implementing low-level Hypertext Transfer Protocol (HTTP), or embedding certain full-fledged Web browsers, such as Internet Explorer or Mozilla Firefox. Web scraping is closely related to Web indexing, which indexes information on the Web using a bot and is a universal technique adopted by most search engines. In contrast, Web scraping focuses more on the transformation of unstructured data on the Web, typically in HTML format, into structured data that can be stored and analyzed in a central local database or spreadsheet. Web scraping is also related to Web automation, which simulates human Web browsing using computer software. Uses of Web scraping include online price comparison, weather data monitoring, website change detection, Web research, Web mashup and Web data integration.

Techniques for Web scraping
Web scraping is the process of automatically collecting Web information. Web scraping is a field with active developments sharing a common goal with the semantic web vision, an ambitious initiative that still requires breakthroughs in text processing, semantic understanding, artificial intelligence and human-computer interactions. Web scraping, instead, favors practical solutions based on existing technologies that are often entirely ad hoc. Therefore, there are different levels of automation that existing Web-scraping technologies can provide:

1.) Human copy-and-paste: Sometimes even the best Web-scraping technology cannot replace a human’s manual examination and copy-and-paste, and sometimes this may be the only workable solution when the websites for scraping explicitly set up barriers to prevent machine automation.

2.) Text grepping and regular expression matching: A simple yet powerful approach to extract information from Web pages can be based on the UNIX grep command or the regular expression matching facilities of programming languages (for instance Perl or Python); a short sketch follows this list.

3.) HTTP programming: Static and dynamic Web pages can be retrieved by posting HTTP requests to the remote Web server using socket programming.

4.) DOM parsing: By embedding a full-fledged Web browser, such as the Internet Explorer or the Mozilla Web browser control, programs can retrieve the dynamic contents generated by client side scripts. These Web browser controls also parse Web pages into a DOM tree, based on which programs can retrieve parts of the Web pages.

5.) HTML parsers: Some semi-structured data query languages, such as XQuery and the HTQL, can be used to parse HTML pages and to retrieve and transform Web content.

6.) Web-scraping software: There are many Web-scraping software tools available that can be used to customize Web-scraping solutions. These software may attempt to automatically recognize the data structure of a page or provide a Web recording interface that removes the necessity to manually write Web-scraping code, or some scripting functions that can be used to extract and transform Web content, and database interfaces that can store the scraped data in local databases.

7.) Vertical aggregation platforms: There are several companies that have developed vertical-specific harvesting platforms. These platforms create and monitor a multitude of “bots” for specific verticals with no man-in-the-loop, and no work related to a specific target site. The preparation involves establishing the knowledge base for the entire vertical, and then the platform creates the bots automatically. The platform's robustness is measured by the quality of the information it retrieves (usually the number of fields) and its scalability (how quickly it can scale up to hundreds or thousands of sites). This scalability is mostly used to target the Long Tail of sites that common aggregators find complicated or too labor-intensive to harvest content from.

8.) Semantic annotation recognizing: The Web pages may embrace metadata or semantic markups/annotations which can be made use of to locate specific data snippets. If the annotations are embedded in the pages, as Microformat does, this technique can be viewed as a special case of DOM parsing. In another case, the annotations, organized into a semantic layer, are stored and managed separately from the Web pages, so the Web scrapers can retrieve data schema and instructions from this layer before scraping the pages.

These are the main techniques used for Web scraping.
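As an illustration of technique 2, here is a hedged Python sketch that fetches a page (placeholder URL) and greps the raw HTML for links; a real scraper would also honour robots.txt and rate-limit itself:

    import re
    from urllib.request import urlopen

    # Fetch a page and grep the raw HTML for hyperlink targets
    html = urlopen("https://www.example.com/").read().decode("utf-8", "replace")

    for link in re.findall(r'href="(https?://[^"]+)"', html):
        print(link)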


http://www.mozenda.com/web-scraping

SenseBot

SenseBot represents a new type of search engine that delivers a summary in response to your search query instead of a collection of links to web pages. SenseBot parses the top results from the web and prepares a text summary of them. The summary serves as a digest on the topic of your query, blending together the most significant and relevant aspects of the search results. The summary itself becomes the main result of your search. SenseBot's approach is unique and protected by a pending patent.
SenseBot is a semantic search engine, meaning that it attempts to understand what the result pages are about. It uses text mining to parse web pages and identify their key semantic concepts. It then performs multi-document summarization of the content to produce a coherent summary.
Search engines are as much the first source of information as they are the starting point for research. Summarization of the content of query results is an innovative technique for obtaining an intelligent response from the system. This is the concept behind SenseBot, and I am glad that Dmitri Soubbotin took the time to answer a few questions on the development of SenseBot.

References
Arun Radhakrishnan, Summarization: Interview with Dmitri Soubbotin of SenseBot, Retrieved December 11, 2007
URL

Clever Bot



Cleverbot is Carpenter’s latest chatting AI. It uses a growing database of 20+ million online conversations to talk with anyone who goes to its website; it can also be described as a computerized program that you can chat with. Cleverbot learns from humans and has conversations with you.
At this stage in its development, talking to Cleverbot is like having a text conversation with a monkey tied to a typewriter as it is being flung down a flight of stairs. Eventually, though, automated conversationalists will become a staple of entertainment and business. Already, robotic voices answer the phone at many large corporations. One day Cleverbots may provide companionship that we won’t be able to discern from the real thing.
The Cleverbot test took place at the Techniche festival in Guwahati, India. Thirty volunteers conducted a typed 4-minute conversation with an unknown entity. Half of the volunteers spoke to humans while the rest chatted with Cleverbot. All the conversations were displayed on large screens for an audience to see.
The picture above is a test at http://www.cleverbot.com/
cleverbot.com is a website where you chat with a machine, and it will automatically reply to the queries the user asks.


Reference
Jacob Aron, Software tricks people into thinking it is human, Retrieved September 2011
URL:

Michael Davidson, How to outsmart a Clever Bot, Retrieved 12 July 2011

Aaron Saenz, Cleverbot Chat Engine Is Learning From The Internet To Talk Like A Human, Retrieved 13 January 2010
URL:

Cleverbot

Wednesday, 19 October 2011

Web Bot-Webscraper

Webscraper
There is some really great webscraper software now on the market. Webscraper software can be an invaluable tool in the building of a new business and in any endeavor requiring extensive research. The new generation of programs incorporates an easy to use GUI with well-defined options and features. It has been estimated that what normally takes 20 man-days can now be performed by these programs in only 3 hours. With a reduction in manpower, costs are trimmed and project timelines are moved up. The webscraper programs also eliminate human error that is so common in such a repetitive task. In the long run, the hundreds of thousands of dollars saved are worth the initial investment.

Video of a screen scraper software, Competitor Data Harvest:
http://www.youtube.com/watch?v=uU91vOsS6xc&feature=player_embedded

Extract Data from the website
Data extraction from a site can be done with extraction software, and it is very easy. What does data extraction mean? It is the process of pulling information from a specific Internet source, assembling it, and then parsing that information so that it can effectively be presented in a useful form. With a good data-extraction tool the process is quick and easy. Anyone can do this, not to mention how simple it becomes to extract and store data in order to publish it elsewhere. Software to extract data from websites is being used by numerous people, with amazing results. Information of all types can be readily harvested.

Free Data Mining Software
There is now a lot of free data mining software available for download on the internet. This software automates the laborious task of web research. Instead of using search engines and clicking through page after page, tiring and straining the eyes, then using the archaic copy and paste into a second application, it can all be set to run while we relax and watch television. Data mining used to be an expensive proposition that only the biggest of businesses could afford, but now there is free data mining software that individuals with basic needs can use to their satisfaction. Many people swear by the free programs and have found no need to go further.

Create data feeds
Creating data feeds, especially RSS, is the process of distributing web content in an online form for easy distribution of information. RSS has enabled the distribution of content from the internet globally. Any type of information located on the web can be turned into a data feed, whether it is for a blog or a news release. The best thing about using a web scraper for these purposes is ensuring that your information is easily captured, formatted and syndicated. Cartoonists and writers in the newspaper business create data feeds for their work to be disseminated to readers. This has greatly enhanced the sharing of information with many people.
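As a sketch of what "creating a data feed" means mechanically, here is a minimal RSS 2.0 document built in Python; every title and URL is a placeholder:

    import xml.etree.ElementTree as ET

    # Build a minimal RSS 2.0 feed with a single item (placeholder values)
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = "Example Scraped Feed"
    ET.SubElement(channel, "link").text = "https://www.example.com/"
    ET.SubElement(channel, "description").text = "Items collected by a web scraper"

    item = ET.SubElement(channel, "item")
    ET.SubElement(item, "title").text = "First scraped story"
    ET.SubElement(item, "link").text = "https://www.example.com/story-1"

    print(ET.tostring(rss, encoding="unicode"))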

Deep Web Searching
The deep web is the part of the Internet that you cannot see. It cannot be reached by the traditional search engines and bots that find most data and information. Deep web searching needs to be done by programs that know specifically how to find this information and extract it. Information found in the deep web includes PDF, Word, Excel, and PowerPoint documents that are used to create a web page. Having access to this information can be very valuable to business owners and law enforcement, since much of it is information that the rest of the public cannot access.

Credits to:
http://www.mozenda.com/web-bot

Tuesday, 18 October 2011

Crossword puzzle

Attention! Our crossword puzzle is released. If you are interested in joining our contest, fill in all the answers and then send them to the author's email.
Good Luck!!!


Across

  1.  A search engine spider
  2.  Another name for a web crawler
  3.  Retrieves information from websites
  4.  A software application that does repetitive and automated tasks on the Internet
  5.  Sends spam emails and typically works automatically
  6.  Uses the Inktomi database

Down

  1. A special type of bot used to check prices
  2. Another name for a web crawler
  3. Crawling, indexing and serving are its key processes in delivering search results.
Malware Bots



Like any program, a bot can be used for good or for evil. "Evil" web crawlers scan web pages for email addresses, instant messaging handles and other identifying information that can be used for spamming purposes. Crawlers can also engage in click fraud, or the process of visiting an advertising link for the purpose of driving up a pay-per-click advertiser's cost. Spambots send hundreds of unsolicited emails and instant messages per second. Malware bots can even be programmed to play games and collect prizes from some websites.


A malware bot is a type of bot which allows an attacker to gain complete control over the affected computer. Computers that are infected with a 'bot' are generally referred to as 'zombies'. There are literally tens of thousands of computers on the Internet which are infected with some type of 'bot' and don't even realize it.
Attackers are able to access and activate them to help execute DoS (denial of service) attacks against web sites, host phishing-attack web sites or send out thousands of spam email messages. Should anyone trace the attack back to its source, they will find an unwitting victim rather than the true attacker.
Malware bots that are used for injecting viruses or malware into websites, or scanning them for security vulnerabilities to exploit, are the most dangerous. It's no surprise that blogs and ecommerce sites that become targets of these practices end up being hacked and injected with porn or Viagra links. All these undesirable bots tend to look for information that is normally off limits, and in most situations they completely disregard robots.txt commands.

Detecting infection

The flood of communications in and out of your PC helps antimalware apps detect a known bot. "Sadly, the lack of antivirus alerts isn't an indicator of a clean PC," says Nazario. "Antivirus software simply can't keep up with the number of threats. It's frustrating [that] we don't have significantly better solutions for the average home user, more widely deployed." Even if your PC antivirus check comes out clean, be vigilant. Microsoft provides a free Malicious Software Removal Tool. One version of the tool, available from both Microsoft Update and Windows Update, is updated monthly; it runs in the background on the second Tuesday of each month and reports to Microsoft whenever it finds and removes an infection.

Reference
Amber Viescas, What are web bots? [online], Retrieved 25 July 2011
URL:
http://www.ehow.com/info_8783848_bots.html

Tony Bradley, What is a bot?[online]
URL:
http://netsecurity.about.com/od/frequentlyaskedquestions/qt/pr_bot.htm

Augusto Ellacuriaga, Practical Solutions for Blocking Spam Bots and Scrapers, Retrieved 25 June 2008
URL:
http://www.spanishseo.org/block-spam-bots-scrapers

Review of HotBot

HotBot, owned by Terra/Lycos, is one of the older Web search engines. Originally it just used the Inktomi database and then added Direct Hit and the Open Directory. Then in Dec. 2002, it relaunched as a multiple search engine with Inktomi, Fast, Google, and Teoma. In July 2003, they stayed with the same four databases but renamed them HotBot, Lycos, Google, and Ask Jeeves. Lycos was dropped in March 2004. This review covers HotBot using the Inktomi database, which they now call "HotBot." See the Google and Teoma (Ask Jeeves) reviews for more details on how their databases and interfaces work, bearing in mind that not all features are available at HotBot. The basic search screen shows no options, but choose Advanced Search for the full range of search features. To see how HotBot used to work, see the old Search Engine Showdown review. Use the table of contents on the left to navigate this review.

Databases: HotBot offers the choice of three search engine databases:
  • HotBot (which is actually a Yahoo!/Inktomi database, and the version reviewed here)
  • Google
  • Ask Jeeves (the Teoma database)
Strengths
  • Advanced searching capabilities
  • Quick check of three major databases
  • Advanced search help
Weaknesses
  • Does not include all advanced features of each of the four databases
  • No cached copies of pages
  • Only displays a few hits from each domain with no access to the rest in Inktomi
Reference:
Greg R. Notess, Review of HotBot(Online), Retrieved April 15, 2004

Friday, 14 October 2011

IBM WebFountain

WebFountain is an Internet analytics engine implemented by IBM for the study of unstructured data on the World Wide Web. IBM describes WebFountain as a set of research technologies that collect, store and analyze massive amounts of unstructured and semi-structured text. It is built on an open, extensible platform that enables the discovery of trends, patterns and relationships from data.

The project represents one of the first comprehensive attempts to catalog and interpret the unstructured data of the Web in a continuous fashion. To this end its supporting researchers at IBM have investigated new systems for the precise retrieval of subsets of the information on the Web, real-time trend analysis, and meta-level analysis of the available information of the Web.

Factiva, an information retrieval company owned by Dow Jones and Reuters, licensed WebFountain in September 2003, and has been building software which utilizes the WebFountain engine to gauge corporate reputation. Factiva reportedly offers yearly subscriptions to the service for $200,000. Factiva has since decided to explore other technologies, and has severed its relationship with WebFountain.

WebFountain is developed at IBM's Almaden research campus in the Bay Area of California.

IBM has developed software called UIMA (Unstructured Information Management Architecture) that can be used for the analysis of unstructured information. It can perhaps help perform trend analysis across documents, determine the theme and gist of documents, and allow fuzzy searches on unstructured documents.

http://www.redbooks.ibm.com/abstracts/redp3937.html
http://news.cnet.com/IBM-sets-out-to-make-sense-of-the-Web/2100-1032_3-5153627.html
Good News!!!
Hi everyone, we are Jason, Liew Wee Sheng and Chew Chu Chiang. We are organizing a crossword puzzle contest whose topics are related to our blog. There are some interesting prizes to give away to participants. For first prize we will give out a 4GB pendrive, for second prize an external USB hub, and for third prize an optical mouse.
Are you interested in our prizes? To win one, just follow a few steps to join our contest. First go to our blog www.webbotcly.blogspot.com, then download the form that is given and fill it in. Lastly, send the form to the author's email.
So what are you waiting for? Come join our contest immediately; if you have any problem you can send an email to us.
Join Our Contest Now!!!

The video above promotes our crossword puzzle contest. If you are interested, just fill in the form and send it to the author's email. Thank you!

Facts on the Spambot

Basic
Spambots are designed to send spam emails and typically work by automated means, hence the word "bot." The messages are pre-typed and sent en masse to various email addresses. Responding to a spam bot will result in a return message that mimics a person, but is in actuality just a program. The emails will frequently contain links to where people can purchase the services or products the spambots claim to offer.
Forum Bots
Spambots will also post on forums and message boards. They will submit bogus content in an attempt at target marketing, or display advertisements promoting a particular product or service while pretending to be an actual person. It may be difficult at times to discern whether a post is genuine or spam.
Email Harvesting
Spambots collect email addresses from postings on the Internet. Forums, blogs and websites may contain email addresses of potential consumers, which the spambot scans and adds to its database. It uses programs called spiders that scan the posted email addresses from web pages. The bots can use these email addresses and the web pages they were found on to send unsolicited advertisements to customers or even fake web pages designed to look official. These fake pages are meant to provoke customers into entering login information, which can lead to identity theft. This technique is called "phishing."
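The harvesting step usually amounts to pattern matching. A rough, illustrative Python sketch of the kind of email regex a harvesting spider applies to page text (real harvesters are more aggressive), which is also why many sites obfuscate addresses as "name [at] example [dot] com":

    import re

    # A rough email pattern of the kind a harvesting spider scans pages with
    EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

    page = 'Contact <a href="mailto:sales@example.com">sales@example.com</a>'
    print(sorted(set(EMAIL_RE.findall(page))))   # ['sales@example.com']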

Reference
Brenton Shields, Facts on the Spambot, Retrieved June 27, 2011

Fourth Poll Analysis

This poll asked whether Web Bot keyword prediction is helpful or not. The picture above shows the result of the vote. There were 13 voters on this question: 10 people voted that keyword prediction is helpful, and 3 people voted "Sometimes". The pie chart shows that about 77% of voters chose "helpful", about 23% chose "sometimes", and 0% voted "no".

Conclusion
The poll result shows that keyword prediction is helpful, as agreed by most of the 13 voters. Among the 13 voters, none voted "No, it's not", which shows that keyword prediction really is helpful. But 3 voters chose "Sometimes", so keyword prediction is still not perfect.

Thursday, 13 October 2011

Shopbot



A shopbot is a special type of web bot. It can also be called a shopping robot, and it can be used to check prices with many retailers on the web. Shopping bots are comparison shopping web sites that help consumers find the best prices offered by online retailers. Shopping bots operate in a similar way to search engines. This bot-like software can be sent out on a mission, usually to find prices from different online stores and report back to the user, so users sometimes call them intelligent agents.

Type of Shopping Bot
There are two general types of shopping bots. The first is the broad-based type that searches a wide range of product categories, such as MySimon.com and PriceScan.com. These sites operate using a Yellow Pages type of model, in that they list every retailer they can find.

Evaluating Shopping Bots

Even the best bots can't guarantee finding the lowest price or searching all retailers. Consumer advocates recommend evaluating prices found by at least three shopping bots before making a purchase.

Some bots are biased in what they report to consumers and give preferential treatment to their marketing partners. Results given to users may only be from retailers that pay a fee to be listed, so users may never learn about bargains from reputable low-cost sellers. Some bots sort prices so that the results from their marketing partners always appear at the top, rather than sorting by lowest price.

Helpful Shopping Bot Features:

  • Easy navigation
  • Well-organized results
  • Sorting by total cost (including shipping, handling, taxes, and restocking fees); see the sketch after this list
  • Prices from a large number of retailers
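Of these features, sorting by total cost is the easiest to show in code; a toy Python comparison over invented offers:

    # Toy shopbot comparison: rank hypothetical offers by total cost
    # (price + shipping), the sorting recommended in the checklist above
    offers = [
        {"store": "StoreA", "price": 19.99, "shipping": 4.50},
        {"store": "StoreB", "price": 21.50, "shipping": 0.00},
        {"store": "StoreC", "price": 18.75, "shipping": 6.25},
    ]

    for offer in sorted(offers, key=lambda o: o["price"] + o["shipping"]):
        print(f"{offer['store']}: ${offer['price'] + offer['shipping']:.2f} total")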
References
Gregg Keizer, PCW Print Web shopping: Bots and beyond, Retrieved November 28, 2001
URL:

David Dinning, What are spiders & Web Bots?, Retrieved July 16, 2011
URL:



Friday, 7 October 2011

Web Bot: 1.2 billion dead in BP oil spill, Nov. 2010 nuclear war. Accurate? Will ETs intervene?

The Web Bot technology is now predicting a 1.289+ billion mega-death resulting from an “ill-wind” and the BP Gulf oil disaster. Researcher Clif High has published a prediction expecting a ‘tipping point’ around November 8, 2010 into global nuclear war, triggered by a mistaken Israeli-influenced attack on Iran that could come anytime after July 11, 2010.
The BP Gulf of Mexico oil catastrophe was arguably foreseen in prophetic scenarios as set out in various sacred prophecy texts such as the Book of Revelation and the Hopi Prophecy’s Seventh Sign, as well as the visions and prophecies of 20th and 21st century psychics, extraterrestrial contactees, and shamans, such as psychic Edgar Cayce (1877-1945), Argentina’s ET contactee Benjamin Solari Parravicini (1898-1974) and Zulu shaman Credo Mutwa (1921 - present).
The Web Bot ALTA report and Zulu shaman Credo Mutwa - both of whom arguably accurately predicted the BP Gulf of Mexico oil catastrophe before it occurred - are now predicting that the BP Gulf oil catastrophe may be one of the largest single human depopulation events in history.
A Web-Bot ALTA report dated June 21, 2010 predicts that “1.289+ billion people” may die from the catastrophic effects of the April 20, 2010 BP oil spill and related environmental impacts in a period starting mid-July 2010. The Web-Bot ALTA REPORT states that “The [oil volcano] subset continues to gain support in support of the [ill winds] area, and is still gaining support for those subsets indicating that 1.289+ billion people will perish as a result of the [ill winds] and the [oil volcano].”
According to Web Bot, this high death figure may come as a result of interactivity between the impact of the BP oil catastrophe and an expected global nuclear war starting around the period commencing November 8, 2010.
Zulu shaman and noted author Credo Mutwa on January 7, 2010 predicted an oil-related catastrophe, approximately two and one-half months before the April 20, 2010 BP Gulf oil spill occurred. On January 7, 2010, an individual who reportedly had just attended a meeting with Zulu shaman and author Credo Mutwa in Africa posted the following message on an internet chat board, “Credo Mutwa apparently just now said half the worlds population won’t see 2011 at a gathering where I'm attending. Some delegates have walked out because he didn't want to give an acceptable explanation, he just said ‘it's no asteroid, comet, plague, ... just OIL’
The current world population is estimated at 6.7 billion. If half the world’s population were to die of causes that can be originally tied to the BP Gulf oil catastrophe (as well as nuclear war), that would mean that approximately 3.3 billion persons would die if Credo Mutwa’s prediction came true. It should be noted that an alternative multi-dimensional ‘reading’ regarding the accuracy of Zulu shaman Credo Mutwa’s psychic prediction indicated that Credo Mutwa’s information may have been derived from “lower astral” dimensions influenced by reptilian factors that Mr. Mutwa tends to focus on.

A review of the literature reveals a number of hypothetical worst-case scenarios for the ecological, biosphere, economic and social impact of the BP oil spill, as well as intentional international destabilization resulting in global nuclear war. One of these worst-case scenarios is that the BP oil spill (and a possible 2010 global nuclear war) are part of an intentional depopulation plan, undertaken and designed by a Rothschild-Rockefeller led (or possibly grey-reptilian extraterrestrial influenced) Malthusian elite to eliminate a substantial portion of present humanity, from one billion persons to half or more of our current human population. Examiner.com has reported there exists empirical research, based on direct reports of abducted persons, connecting a grey and hybrid extraterrestrial intervention strategy to a global environmental catastrophe.

An analysis of the BP oil spill worst-case scenarios and of the Web Bot and ALTA report technology itself suggests, however, that the Web Bot predictions may be based on memes generated by the Web Bot and other worst-case predictions themselves. In this case, a hacking of the rense.com website and illegal distribution of the Web Bot ALTA report containing the prediction of 1.2 billion dead from the BP oil spill may itself have led to the self-fulfilling meme magnification effect in the Web Bot prediction.
Examiner.com has reported on this short-coming of the Web Bot technology in our reporting on the “2012 catastrophe meme”: “(A) 2012 meme - One possibility is that the Web Bot technology may be detecting the presence on the Internet of an escalating meme regarding a ‘repetition of a Carrington-type [solar flare] event during the 2012-13 solar maximum,’ rather than an actual future event. This is the more probable 2012 reality.” Mr. High has also exhibited a tendency to refer readers to cataclysmic 2012-13 pole shift scenarios that are scientifically implausible.
The same methodological shortcoming may apply to the Web Bot prediction of global nuclear war by November 2010. Maverick leaders such as former Cuban President Fidel Castro Ruz, in a June 27, 2010 letter, predicted a Web Bot-like scenario whereby a global nuclear war would erupt out of a U.S.-Israeli attack on Iran and would itself disrupt the world’s food supply and have an incalculable effect on the environment. The Web Bot may be processing future memes of catastrophe and reifying them into actual war. Regarding the Web Bot’s reports of possible global nuclear war by Nov. 2010, Examiner.com has reported on the concerns of extraterrestrial civilizations about the detrimental dimensional impacts of possible global nuclear war. Examiner.com has reported that “There is converging objective predictive evidence, expert opinion, exopolitical policy analysis, and extraterrestrial contactee communications supporting a hypothesis that large scale ‘wild card’ event(s), involving mass extraterrestrial (UFO) sightings or landings over major urban or other visible centers on the planet for peaceful purposes may occur during the period leading up to 2011-12 or beyond.”
Thus, should an unwise U.S. and Israeli attack on Iran escalate into global nuclear war, it is plausible, by objective evidence, that extraterrestrial civilizations may intervene to prevent such a war (or in aid of a false flag ET invasion).
This Examiner.com article explores the evidence that, despite the Web Bot and ALTA reports predictions of mega death from the BP oil disaster and possible 2010 (and beyond) nuclear war, the objective reality is that (1) mega death will probably not occur; (2) global nuclear war will not occur; and (3) Extraterrestrial civilizations may intervene to prevent a global nuclear holocaust destroying the ecology of Earth and humankind, or as a false flag ET operation to impose a global oppressive dictatorship on humanity.

Alfred Lambremont Webre, Seattle Exopolitics Examiner June 30, 2010
URL
http://www.examiner.com/exopolitics-in-seattle/web-bot-1-2-billion-dead-bp-oil-spill-nov-2010-nuclear-war-accurate-will-ets-intervene

Webbot Command Line Syntax

This post shares the webbot command line syntax from the W3C.

The generic syntax is: webbot [ options ] [ URL ] [ keywords ]
Robots.txt and HTML META tags

There are situations where you may not want the robot to behave as a robot but more as a link checker in which case you may consider using these options:

-norobotstxt
If for some reason you don't want the robot to check for a robots.txt file, then add this command line option.
-nometatags
If for some reason you don't want the robot to check for HTML robots-related META tags, then add this command line option.

Distribution and Statistics Features
Note that if you are using SQL-based logging, then the set of statistics that can be drawn directly from the database is much larger.

-charset [ file ]
Specifies a log file of which charsets (content type parameters) were encountered in the run and their distribution
-format [ file ]
Specifies a log file of which media types (content types) were encountered in the run and their distribution
-hit [ file ]
Specifies a log file of URIs sorted by how many times they were referenced in the run
-lm [ file ]
Specifies a log file of URIs sorted by last-modified date. This gives a good overview of the dynamics of the web site that you are checking.
-rellog [ file ]
Specifies a log file of any link relationship found in the HTML LINK tag (either the REL or the REV attribute) that has the relation specified in the -relation parameter (all relations are modelled by libwww as "forward"). For example, "-rellog stylesheets-logfile.txt -relation stylesheet" will produce a log file of all link relationships of type "stylesheet". The format of the log file is

"<from-URI> --> <to-URI>"

meaning that the from-URI has the forward relationship with the to-URI.
-title [ file ]
Specifies a log file of URIs sorted by any title found either as an HTTP header or in the HTML.


References
Henrik Frystyk Nielsen, Webbot Command Line Syntax, Retrieved 04/05/1999
URL :
http://www.w3.org/Robot/User/CommandLine.html

Thursday, 6 October 2011

ATTENTION! Contest coming soon


Hello everyone, we would like to inform you that we will be having a Crossword Puzzle contest coming soon. All of the puzzle questions are related to our blog's title, which is "Web Bot".
Rules and Regulation
Participants should follow the rules below:
  • Those below the age of 18 are not allowed to register for this contest

  • This contest has no area limit, but participants must be in Malaysia

  • Fill in the form accordingly to complete the registration for the contest, and submit the form to J11008245@hotmai.com, technoint@hotmail.com or

  • Name:
    IC Number
    Gender
    Email address
    State
    City
    Contact Number
    Download the form from this link: https://docs.google.com/document/d/100-8uw02XMxjKHQsBALqvaFKeO43kNEV-GPgswSLl7I/edit?hl=en_GB
  • Each person is only allowed to submit one entry; duplicate entries or cheating will lead to disqualification

  • Complete the Crossword Puzzle and submit it in a .txt file via email to J11008245@hotmai.com, technoint@hotmail.com or


  • Prizes will be awarded to those who submit their answers first

  • Only the most correct answers will win a prize


  • PRIZE
  • First prize : 4GB Pendrive x1

  • Second prize: External USB hub x1

  • Third prize: Optical mouse x1
Internet Bots

An Internet bot is a software application that does repetitive and automated tasks on the Internet that would otherwise take humans a long time to do. The most common Internet bots are the spider bots, which are used for web server analysis and file data gathering. Bots are also used to provide the higher response rates required by some online services, like online auctions and online gaming.

Web interface programs like instant messaging and Internet Relay Chat applications can also be used by Internet bots to provide automated responses to customers. These Internet bots can be used to give weather updates, currency exchange rates, sports results, telephone numbers, etc. Examples are Jabberwacky on Yahoo Messenger or SmarterChild on AOL Instant Messenger. Moreover, bots may be used as censors in chat rooms and forums.

Today, bots are used in even more applications and are even available for home and business use. These new bots are based on a code called LAB code, which uses the Artificial Intelligence Mark-up Language. Some sites like Lots-A-Bots and RunABot offer these types of services, where one can send automated IMs, emails, replies, etc.

URL: http://rield.com/faq/web-robots
http://www.tech-faq.com/internet-bots.html

Third Poll Result Analysis

The title of last week's poll was "Do you think that having Web Bot in your life is good?". We have created a pie chart of the vote results.

The vote results showed that only 12 people voted in total. 7 people voted "Yes" and 3 people voted "No". 2 people voted "I'm not sure" whether having Web Bot in their life is good or not.

In conclusion, according to the chart above, most people voted that having Web Bot in their life is good. That means Web Bot is very helpful for a lot of people. Besides that, we will always improve our blog and share more information with all of you.



Friday, 30 September 2011

Googlebot

Crawling, indexing and serving are the key processes in delivering search results.

Crawling
Crawling is the process by which Googlebot discovers new and updated pages to be added to the Google index. On each website Googlebot visits, it detects the links on each page and adds them to its list of pages to crawl. New sites, changes to existing sites, and dead links are noted and used to update the Google index.

Indexing
For each page it crawls, Googlebot compiles a massive index of all the words it sees and their location on each page.

Serving
When a user enters a query, Google's machines search the index for matching pages and return the results believed to be the most relevant to the user.

Credit to: Webmaster Tools Help, 7/22/2011

Saturday, 24 September 2011

Second Poll Result

The picture shown on the right-hand side shows the result of the vote: 11 voters voted on this question. 8 people voted "yes" and 3 people voted "don't know what a web bot is".






We did a poll with the title "Are web bots helping you?", and created a pie chart for the votes, shown on the right-hand side.







Conclusion
In conclusion, the voters have given me the result that web bots are helping them. Besides that, 3 voters voted "don't know what a web bot is", which means our web bot information needs to improve, to let people know what a web bot is.
Thank you to the people who voted to give me this result. It shows that I need to improve by posting more information about web bots, to let readers know what a web bot is.
Thank you...

Thursday, 22 September 2011

Podcast talking about what a web bot is

A brief explanation of what a web bot is, by podcast. The podcast was recorded in Chinese and English; the English transcripts of the three speakers follow.

I am Liew Wee Sheng. Web bots simply crawl the web the same way Google crawls it, at regular intervals, to catch new and existing web sites and detect relevant keywords. This is the most important concept of web bots, because it sets the limits on what web bots are able to predict. This also explains why it is impossible to predict any end-of-the-world prophecy. Why is that? The main goal of a web bot is simply to crawl the web, the same way Google would, to extract important information from websites. That important information is usually the most relevant keywords on a website, put through a certain algorithm that is able to get the meaning of the sentences the keywords are used in. This information is then put in a large database, and the final goal is to compare similar topics to determine whether they each point towards similar conclusions.

I am Chew Chu Chiang. The web contains information written by humans; web bots crawl that information, find correlations and make predictions. There is nothing supernatural in web bots. They gather, keep and interpret information in a way the human brain can't. Google uses the data it collects to do a lot of other stuff besides giving you search results, but it wouldn't be able to predict things humans don't have control over. I think web bots have a nice future, but keep in mind that the web is man-made and the bots crawl the web, so there will be limits to their capacities.

I am Yao Weng Sheng. We have to be careful here, because web bots do have a power we don't have: to merge all that information across the web and try to find a correlation. So they can go a little further than what a single human can do, but they can't go any further than what's possible for humans to predict. If it were really possible to predict incredible stuff, Google would have the answer to every question. Wait a minute... they do! Seriously, it's possible to extract relevant information from the web, but it has a limit. Google uses the data it collects to do a lot of other stuff besides giving you search results, but it wouldn't be able to predict things humans don't have control over if it tried. It's the same thing for web bots, except they're much smaller than Google.

Friday, 16 September 2011

The TimeWave

The TimeWave is a mathematical program that purports to measure the ebb, flow and rate of novelty in our world. The TimeWave depicts increasingly greater magnitudes of novelty as we approach Dec. 21, 2012, the day the Mayan Long Count Calendar starts anew. The TimeWave was developed independently of Mayan Calendar knowledge. TimeWave theory is based on the mathematics of the ancient Chinese divinatory system known as the I Ching, or Book of Changes. The famed ethnobotanist Terence McKenna is the person responsible for developing TimeWave theory. The TimeWave takes the form of a software program that generates a wave graph plotting a timeline over 4000 years in duration. The TimeWave can be mapped using 5 different number sets. Certain versions are said to be more "mathematically sound" than others. They all end on the same date of Dec 21, 2012, but the wave patterns of peaks and valleys vary among the different versions.
The Sheliak, Watkins and Kelly versions match up well in general terms, and the Watkins and Kelly versions are nearly identical. The Franklin and Huang Ti versions are the most divergent.

Credits to:

Wednesday, 14 September 2011

Web Spider

Web spiders are software agents that traverse the Internet gathering, filtering, and potentially aggregating information for a user. Using common scripting languages and their collections of Web modules, you can easily develop Web spiders. Web spiders help people search the Internet efficiently and easily. Search engines use Web spiders to crawl the Web pages on the Internet, return their content, and index it. What a Web spider looks for in text is relevant content. Spider bots can only scan text and follow links, so images and graphics on a web page mean nothing to a search engine bot for indexing.

Spider detection:
Google and Yahoo spiders can be recognized by their user agent string. After the user agent has been detected, the next step is to check the IP. If it matches, you can be sure that this is a real search engine spider. It is good to know the IP ranges of search engine spiders, because User Agent strings can be faked.
Some IPs and resolved hostnames you can use to detect search engine web spiders:
Google: 66.249.64.* to 66.249.95.*, crawl-66-249-*, *.googlebot.com
Yahoo: 72.30.*, 74.6.*, 67.195.*, 66.196.*, *.crawl.yahoo.net, *.inktomisearch.com
MSN/LIVE/BING: 65.54.*, 65.55.*, msnbot.msn.com, *.search.live.com
Fake Google spiders have been spotted from 66.249.16.* (Google IPs are from 66.249.31.xxx)

These IPs are examples only; for better detection you need to use a longer IP prefix, i.e. 65.55.252.* for MSN, to be sure that it is not some other spider. Best is to check WhoIs to get the IP range.
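A hedged Python sketch of that check, using the host-name suffixes quoted above: reverse-DNS the IP, verify the suffix, then forward-resolve the name and confirm it maps back to the same IP:

    import socket

    def is_real_googlebot(ip):
        # Step 1: reverse-DNS the caller's IP to a host name
        try:
            host = socket.gethostbyaddr(ip)[0]
        except socket.herror:
            return False
        # Step 2: genuine Googlebot hosts live under googlebot.com / google.com
        if not host.endswith((".googlebot.com", ".google.com")):
            return False
        # Step 3: forward-resolve the name and confirm it maps back to the IP
        return ip in socket.gethostbyname_ex(host)[2]

    print(is_real_googlebot("66.249.66.1"))  # an IP from the range quoted above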

Credits to:




Tuesday, 13 September 2011

Web Bot Project Predictions and 2012

Web Bot, or the Web Bot Project, refers to an Internet bot software program that is claimed to be able to predict future events by tracking keywords entered on the Internet. It was created in 1997, originally to predict stock market trends. The project uses "spiders", a technology used by search engines like Google and Yahoo!, to crawl the web and search for the keywords it has been given. When a noted keyword is located, the bot records the text before and after the keyword. This record of text then gets sent to the program to filter and define its meaning.
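The "record the text before and after the keyword" step described above is easy to sketch; a toy Python version with an arbitrary context window:

    import re

    # When a tracked keyword appears, record a window of text around it
    def keyword_contexts(text, keyword, window=30):
        contexts = []
        for m in re.finditer(re.escape(keyword), text, re.IGNORECASE):
            start = max(m.start() - window, 0)
            end = min(m.end() + window, len(text))
            contexts.append(text[start:end])
        return contexts

    page = "Analysts expect the market to fall sharply before it recovers."
    print(keyword_contexts(page, "market"))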


The earliest and most eerie of the big predictions came in June of 2001. The Web Bot program indicated that a major life-changing event would take place within the next few months. Based on the web chatter picked up by the Web Bot, its operators concluded that a major event would take place soon. Unfortunately, the Web Bots proved to be prophetic, as the World Trade Center and the Pentagon were attacked on September 11th, 2001.

According to the Web Bot Project, the Web Bot predicts major calamitous events to unfold in 2012!


Credits to:
http://2012supplies.com/what_is_2012/web_bot_2012.html

Thursday, 1 September 2011

Type of web bot: crawler

A web crawler is a type of bot, also known as a spider or robot.
Crawlers are used to automate maintenance tasks on a Web site, such as identifying links or validating HTML code.
A crawler is a program that browses the World Wide Web in a methodical fashion.
Googlebot discovering new and updated pages to be added to the Google index is called crawling.

How does a web crawler work?
It is a program that downloads a seed page from the World Wide Web, extracts the links contained in that page, fetches the pages those links refer to, and then extracts the links in those pages in turn.
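A hedged Python sketch of that loop: download a seed page, extract its links, fetch the pages they refer to, and repeat (placeholder seed URL; a polite crawler would also honour robots.txt):

    from collections import deque
    from urllib.parse import urljoin
    from urllib.request import urlopen
    import re

    # Minimal breadth-first crawler: fetch a page, queue its unseen links
    def crawl(seed, max_pages=10):
        seen, queue, fetched = {seed}, deque([seed]), 0
        while queue and fetched < max_pages:
            url = queue.popleft()
            try:
                html = urlopen(url, timeout=10).read().decode("utf-8", "replace")
            except Exception:
                continue                      # skip pages that fail to download
            fetched += 1
            print("crawled:", url)
            for href in re.findall(r'href="([^"]+)"', html):
                link = urljoin(url, href)     # resolve relative links
                if link.startswith("http") and link not in seen:
                    seen.add(link)
                    queue.append(link)

    crawl("https://www.example.com/")  # placeholder seed URL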

Credits to:
