
Friday, 30 September 2011

Googlebot


Crawling, indexing and serving are the key processes in delivering search results.

Crawling
Crawling is the process by which Googlebot discovers new and updated pages to be added to the Google index.
As Googlebot visits each website, it detects the links on every page and adds them to its list of pages to crawl. New sites, changes to existing sites, and dead links are noted and used to update the Google index.

Indexing
Indexing is the process in which Googlebot compiles a massive index of all the words it sees on each page it crawls, along with
the location of each word on each page.

Serving
When a user enters a query, Google's machines search the index for matching pages and return the results believed to be the most relevant to the user.
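The three steps above can be sketched in a few lines of code. This is only an illustrative toy (the page contents and URLs are made up, and Google's real pipeline is vastly more complex): it builds an inverted index mapping each word to its locations, then serves a query by looking that word up.

```python
# Toy sketch of the index/serve steps described above (illustrative only).
from collections import defaultdict

# Stand-ins for pages already fetched by the crawling step.
pages = {
    "page1.html": "web bots crawl the web",
    "page2.html": "google indexes the web",
}

# Indexing: record every word it sees and its location (page, position).
index = defaultdict(list)
for url, text in pages.items():
    for pos, word in enumerate(text.split()):
        index[word].append((url, pos))

# Serving: search the index for pages matching the query word.
def serve(query):
    return sorted({url for url, _ in index.get(query, [])})

print(serve("web"))     # pages containing the word "web"
print(serve("google"))
```

A real index would also store ranking signals alongside each posting; here the positions alone show the "words and their locations" idea from the post.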

Credit to: Webmaster Tools Help, 7/22/2011

Saturday, 24 September 2011

Second poll result

The picture shown on the right-hand side presents the result of the vote: 11 voters answered this question. Eight people voted "yes" and three people voted "don't know what a web bot is".






We ran a poll titled "Are web bots helping you?" and created a pie chart of the votes, shown on the right-hand side.







Conclusion
In conclusion, the voters have told me that web bots are helping them. Besides that, three voters answered "don't know what a web bot is", which means the information on web bots here needs to improve so that people know what a web bot is.
Thank you to everyone who voted and gave me this result. It shows that I need to post more information about web bots so readers know what they are.
Thank you...

Thursday, 22 September 2011

Podcast: what is a web bot

A brief explanation of what a web bot is, in podcast form.

I am Liew Wee Sheng, Web bots simply crawl the web the same way Google crawls it at regular intervals to catch new and existing web sites and detect relevant keywords. This is the most important concept of web bots because it sets the limits to what the web bots are able to predict. This also explains why it’s impossible to predict any end of the world prophecy. Why is that? The main goal of a web bot is simply to crawl the web the same way Google would do it to extract important information from websites. That important information is usually the most relevant keywords on a website put with a certain algorithm that is able to get the meaning of the sentences the keywords are used in. This information is then put in a large database and the final goal is to compare similar topics to determine if they each point towards similar conclusions.

I am Chew Chu Chiang. The web contains information written by humans; web bots crawl that information, find correlations and make predictions. There is nothing supernatural about web bots: they gather, keep and interpret information in a way the human brain can't. Google uses the data it collects to do a lot of other things besides giving you search results, but it wouldn't be able to predict things humans don't have control over. I think web bots have a nice future, but keep in mind that the web is man-made and the bots merely crawl it, so their capacities have their own limits.

I am Yao Weng Sheng. We have to be careful here, because web bots do have a power we don't have: to merge all that information across the web and try to find a correlation. So, they can go a little further than what a single human can do, but they can't go any further than what's possible for humans to predict. If it were really possible to predict incredible stuff, Google would have the answer to every question. Wait a minute… they do! Seriously, it's possible to extract relevant information from the web, but it has a limit. Google uses the data it collects to do a lot of other stuff than giving you search results, but it wouldn't be able to predict things humans don't have control over if it tried. It's the same thing for web bots, except they're much smaller than Google.

Friday, 16 September 2011

The TimeWave

The TimeWave is a mathematical program that purports to measure the ebb, flow and rate of novelty in our world. The TimeWave depicts increasingly greater magnitudes of novelty as we approach Dec. 21, 2012—the day the Mayan Long Count Calendar starts anew—although it was developed independently of Mayan Calendar knowledge. TimeWave theory is based on the mathematics of the ancient Chinese divinatory system known as the I Ching, or Book of Changes. The famed ethnobotanist Terence McKenna is the person responsible for developing TimeWave theory. The TimeWave takes the form of a software program that generates a wave graph plotting a timeline over 4000 years in duration. The TimeWave can be mapped using 5 different number sets. Certain versions are said to be more "mathematically sound" than others. They all end on the same date of Dec 21, 2012, but the wave patterns of peaks and valleys vary among the different versions.
The Sheliak, Watkins and Kelly versions match up well in general terms, and the Watkins and Kelly versions are nearly identical. The Franklin and Huang Ti versions are the most divergent.

Credits to:

Wednesday, 14 September 2011

Web Spider

Web spiders are software agents that traverse the Internet gathering, filtering, and potentially aggregating information for a user. Using common scripting languages and their collections of Web modules, you can easily develop Web spiders. Web spiders help people search the Internet efficiently and easily: they crawl the Web pages on the Internet, return their content, and index it. What a Web spider looks for in text is relevant content. Spider bots can only scan text and follow links, so images and graphics on a web page have no meaning to a search engine bot for indexing.
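To illustrate the "text and links only" point, here is a minimal sketch using Python's standard html.parser module (the page content is made up). It keeps the text and the link targets, and simply never looks at image tags:

```python
# Sketch of what a spider extracts from a page: text and links, ignoring
# images and other markup, using only Python's standard library.
from html.parser import HTMLParser

class SpiderParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links, self.text = [], []

    def handle_starttag(self, tag, attrs):
        # Collect the href of every <a> tag; <img> tags are simply
        # ignored, since graphics mean nothing to an indexing bot.
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

    def handle_data(self, data):
        if data.strip():
            self.text.append(data.strip())

page = '<p>Hello <a href="/next.html">next page</a></p><img src="logo.png">'
p = SpiderParser()
p.feed(page)
print(p.links)  # ['/next.html']
print(p.text)   # ['Hello', 'next page']
```

The collected links are what a crawler follows next; the collected text is what gets indexed.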
Spider detection:
Google and Yahoo spiders can be recognized by their user-agent string. After the user agent has been detected, the next step is to check the IP. If it matches, you can be sure this is a real search engine spider. It is good to know the IP ranges of search engine spiders, because the user-agent string alone can be faked.
Some IPs and resolved hostnames you can use to detect search engine web spiders:
Google**: 66.249.64.* to 66.249.95.*, crawl-66-249-* , *.googlebot.com
Yahoo: 72.30.* , 74.6.* , 67.195.* , 66.196.* , *.crawl.yahoo.net , *.inktomisearch.com
MSN/LIVE/BING :65.54.* , 65.55.* , msnbot.msn.com , *.search.live.com
Fake Google spiders spotted from 66.249.16.* (Google IPs are from 66.249.31.xxx)

These IPs are examples only; for better detection you need to use a longer IP prefix, e.g. 65.55.252.* for MSN, to be sure it is not some other spider. Best is to check WhoIs to get the exact IP range.
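The two-step check described above (match the user-agent string, then confirm the client IP against the published range) can be sketched like this. The ranges below are just the example prefixes from this post converted to CIDR form; real detection should use current WhoIs data:

```python
# Sketch of user-agent + IP-range spider detection. The ranges are the
# example prefixes given in the post, not an authoritative list.
import ipaddress

SPIDER_RANGES = {
    "googlebot": ["66.249.64.0/19"],               # 66.249.64.* - 66.249.95.*
    "msnbot":    ["65.54.0.0/16", "65.55.0.0/16"], # MSN/Live/Bing examples
}

def is_search_spider(user_agent, ip):
    """True only if the UA claims a known bot AND the IP is in its range."""
    ua = user_agent.lower()
    addr = ipaddress.ip_address(ip)
    for bot, ranges in SPIDER_RANGES.items():
        if bot in ua:
            return any(addr in ipaddress.ip_network(net) for net in ranges)
    return False  # unknown user agent

# A fake spider claims Googlebot's UA but comes from the wrong range:
print(is_search_spider("Mozilla/5.0 (compatible; Googlebot/2.1)", "66.249.70.1"))
print(is_search_spider("Mozilla/5.0 (compatible; Googlebot/2.1)", "66.249.16.5"))
```

A stronger variant of the same idea is a reverse-DNS lookup on the IP, confirming the hostname ends in googlebot.com, followed by a forward lookup back to the same IP.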

Credits to:




Tuesday, 13 September 2011

Web Bot Project Predictions and 2012

Web Bot, or the Web Bot Project, refers to an Internet bot software program that is claimed to be able to predict future events by tracking keywords entered on the Internet. It was created in 1997, originally to predict stock market trends. The project uses "spiders", the same technology used by search engines like Google and Yahoo, to crawl the web searching for a set of tracked keywords. When a tracked keyword is located, the bot records the text before and after it. This record of text is then sent to a program that filters it and tries to define its meaning.
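The "record the text before and after the keyword" step can be sketched as a simple context-window capture. This is only a guess at the general idea (the window size, the sample sentence and the function name are all made up, not the actual Web Bot code):

```python
# Illustrative sketch of keyword-context capture: when a tracked keyword
# appears in a page's text, record a window of words around it.
def keyword_contexts(text, keyword, window=3):
    words = text.split()
    hits = []
    for i, w in enumerate(words):
        if w.lower() == keyword.lower():
            before = words[max(0, i - window):i]   # text before the keyword
            after = words[i + 1:i + 1 + window]    # text after the keyword
            hits.append((" ".join(before), " ".join(after)))
    return hits

page = "analysts say the market will drop sharply next quarter"
print(keyword_contexts(page, "market"))
# [('analysts say the', 'will drop sharply')]
```

The filtering program would then look for patterns across many such captured snippets.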


The earliest and most eerie of the big predictions came in June 2001. The Web Bot program indicated that a major life-changing event would take place within the next few months. Based on the web chatter picked up by the Web Bot, they concluded that a major event would take place soon. Unfortunately, the Web Bots proved to be prophetic, as the World Trade Center and the Pentagon were attacked on September 11th, 2001.

According to the Web Bot Project, the Web Bot predicts major calamitous events to unfold in 2012!


Credits to:
http://2012supplies.com/what_is_2012/web_bot_2012.html

Thursday, 1 September 2011

Type of web bot: crawler


A web crawler is a type of bot, also known as a spider or robot.
Crawlers are used to automate maintenance tasks on a Web site, such as identifying links or validating HTML code.
A crawler is a program that browses the World Wide Web in a methodical way.
Googlebot discovering new and updated pages to be added to the Google index is what is called crawling.


How does a web crawler work?
It is a program that downloads a seed page from the World Wide Web, extracts the links contained in that page,
fetches the pages those links refer to, and extracts the links in those pages in turn.
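That seed-then-follow-links loop can be sketched as a breadth-first traversal. A real crawler fetches pages over HTTP and parses links out of the HTML; here a made-up dictionary stands in for the web so the loop itself is clear:

```python
# Sketch of the crawl loop described above, using an in-memory "web"
# (page -> list of outgoing links) instead of real HTTP fetches.
from collections import deque

WEB = {
    "seed.html": ["a.html", "b.html"],
    "a.html": ["b.html", "c.html"],
    "b.html": [],
    "c.html": ["seed.html"],  # cycles are handled by the visited set
}

def crawl(seed):
    visited, frontier = set(), deque([seed])
    order = []
    while frontier:
        url = frontier.popleft()
        if url in visited:
            continue
        visited.add(url)
        order.append(url)                  # "download" the page
        frontier.extend(WEB.get(url, []))  # extract and queue its links
    return order

print(crawl("seed.html"))
# ['seed.html', 'a.html', 'b.html', 'c.html']
```

The visited set is what keeps the crawler from looping forever on pages that link back to each other.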

Credits to:
