Tuesday, September 25, 2012

How can PayPal boost online economy of Pakistan?

PAYPAL IS the global standard for payment transfers and for carrying out online transactions. It is a swift mode of payment that allows people to send and receive money instantly.
Moreover, the majority of e-commerce websites rely on PayPal to collect transaction amounts from customers.
The common problem for Pakistan’s 25 million-plus internet users is that PayPal’s service is not available to them, either because of PayPal’s own policies or because of the hindrances it faces from local banks and financial governance institutions.
Many experts hint that PayPal has not yet started its operations in Pakistan due to technical as well as law and order concerns. Factors such as money laundering, terrorist financing, a weak banking infrastructure in Pakistan and a lack of cooperation on the part of the State Bank and local authorities may be hindering PayPal from entering Pakistan.
Who can play a role for removing those hindrances?
Government departments such as the Pakistan Software Export Board (PSEB), industry support organizations such as the Pakistan Software Houses Association (PASHA) and infrastructure development bodies like the Pakistan Telecommunication Authority (PTA) can play a proactive role in removing the hindrances that keep PayPal from coming to Pakistan.
It is also the responsibility of the government to improve the law and order situation in the country and to make strict laws to curtail cybercrime. Passing an anti-money-laundering bill is another need the government should address to pave the way for PayPal in Pakistan.
PayPal and Online Eco System Based on ‘Cash-on-Delivery’
Today most e-commerce vendors in Pakistan rely on a Cash-on-Delivery system in which payment is received at the consumer’s doorstep at the time the goods are delivered.
For businesses this means committing extra resources to payment collection, an additional cost of doing business. It also involves risk, as many scams occur while delivering goods to far-flung areas of the country.
Virtual stores can help businesses reduce costs
Moreover, selling goods offline costs more, as vendors have to open physical stores at different locations. Virtual stores allow them to sell anywhere in Pakistan from a single online storefront.
If awareness of online stores is created, it can boost business activity throughout the country. The arrival of PayPal would allow local stores to grow their businesses, which would consequently result in job creation. It would also encourage entrepreneurs to establish new internet-based businesses.
PayPal and Exports
Pakistan is lagging behind other developing countries such as India, which has 120 million internet users. Pakistani IT companies are not able to receive and send payments easily and find it difficult to sell their software and services online.
Here are some statistics that reveal the potential of PayPal in the modern global economy:
The total volume of transactions processed through PayPal in 2011 was $118 billion.
Annual online spending in the US was $202 billion by the end of 2011 and is projected to rise to $226 billion in 2012 and $327 billion in 2016.
The IT professionals and freelancers in Pakistan face numerous problems when it comes to receiving payments from clients. Similarly, it is difficult for the software companies to sell their products internationally.
If PayPal starts to operate in Pakistan, it will reduce transaction costs and hassle. Currently, local businesses rely on less reliable modes of payment, such as wire transfer, Western Union, Moneybookers and Payoneer, which are not only slow but also costly.

A journey from Amber to Fiber


“ELECTROMAGNETICS” IS at the heart of everything that is done with electricity. It concerns itself with the forces that charged particles exert upon each other. Electromagnetics is a word that was coined in the late 1800s to denote a newly discovered phenomenon that was the combination of what previously had been thought to be completely separate phenomena: electricity and magnetism.
The effects of electric charge have been known since 600 B.C. History records that the ancient Greeks discovered that amber, a hard, translucent resin, attracted bits of material after it was rubbed with fur. Nearly 2,000 years passed before William Gilbert realized in the early 1600s that this same effect could be observed when rubbing a variety of substances together. It was he who coined the term “electric”, using the Greek word for amber, “elektron”. In 1660, Otto von Guericke invented a machine that produced static electricity, and in 1675 Robert Boyle discovered that electric force could be transmitted through a vacuum and observed attraction and repulsion. The first indication that electricity can move from one place to another came from experiments conducted by Stephen Gray in 1729. He found that when two objects were connected by a tube, both could be electrified when only one was rubbed. In 1733, Charles Francois du Fay discovered that electricity comes in two forms, which he called resinous (-) and vitreous (+). Benjamin Franklin and Ebenezer Kinnersley later renamed the two forms positive and negative. Stephen Gray’s discovery led J.T. Desaguliers in 1739 to identify a class of materials called “conductors” that pass electricity easily. In 1745, a Dutch physicist, Pieter van Musschenbroek, invented the “Leyden jar”, the first electrical capacitor, used to store static electricity, and in 1747 William Watson discharged it through a circuit, which began the understanding of current and circuits.
In 1750, John Michell theorized that a permanent magnet has north and south poles that attract or repel each other according to an inverse-square law similar to Coulomb’s law of force. In 1752, Ben Franklin conducted his famous kite experiment; he invented lightning rods and sold them throughout colonial America. The first quantitative theories of magnetism were advanced in the 18th century. In the year 1800, Volta developed the first chemical battery, which consisted of strips of dissimilar metals immersed in a weak electrolyte. The first evidence that electric and magnetic phenomena are related came from Hans Christian Oersted, who, in 1819, discovered that a steady current could move a compass needle, just as a permanent magnet can. This was closely followed by Andre-Marie Ampere’s discovery that electric currents exert attractive and repulsive forces on each other. In 1820, D.F. Arago invented the electromagnet. One of the most important series of experiments was performed by Georg Simon Ohm in 1826; he showed that when a constant voltage is applied to a conductor, the resulting current is proportional to the conductor’s cross-sectional area and inversely proportional to its length. Another important experimental connection between electric and magnetic effects was discovered by Michael Faraday in 1831. He conducted an experiment in which two insulated wires were wrapped around an iron core and found that when the current in one winding was switched, a voltage was induced in the other, in effect creating the first transformer. In 1837, Samuel Morse invented the telegraph, and in 1858 the first transatlantic telegraph cable was laid.
With the discovery of Faraday’s law, the stage was set for the development of a complete theory of electromagnetism. This was accomplished by James Clerk Maxwell, a professor of experimental physics at Cambridge University. In 1873 he published “A Treatise on Electricity and Magnetism”. In this work, he proposed that just as time-varying magnetic fields can produce electric fields, the opposite is also true. Adding this conjecture to what was already known about electricity and magnetism, Maxwell produced his now-famous system of equations, called Maxwell’s equations. In 1876, Alexander Graham Bell invented the telephone, and in 1879 Thomas Alva Edison invented the light bulb. Edison directed the operation of the first central commercial incandescent electric generating station in the United States; in 1882 it provided electricity to one square mile of New York City. The definitive experimental verification of Maxwell’s theory came through a series of experiments conducted by Heinrich Hertz beginning in 1886; he demonstrated electromagnetic waves in 1888.
The most dramatic application of the new electromagnetic theory came in 1901, when Guglielmo Marconi sent the first wireless telegraph signals across the Atlantic Ocean. The next two decades saw a host of developments in antennas, amplifying devices and modulation techniques, culminating in the first commercial radio broadcasting in the early 1920s. Television soon followed in the early 1930s, followed by radar in the late 1930s. Wireless communication is probably the most conspicuous application of electromagnetics, since it involves the propagation of electromagnetic waves through air or space. Nevertheless, Maxwell’s electromagnetic theory has been equally important in the development of a host of other engineering applications. Other devices and systems in whose development electromagnetic theory played an important part include the vacuum tube (1906), the magnetron (1940), the transistor (1950), the laser (1960) and fiber-optic systems (late 1970s). Starting with the operation of TAT-8, the eighth transatlantic telephone cable, in 1988, fiber-optic systems saw widespread adoption from 1990 onward. In fact, it is safe to say that electromagnetic theory has been an essential ingredient in the development of every electrical device or system that we now take for granted.

Water based vehicle

 
ONE OF engineering’s primary challenges lies in solving the world’s energy crisis. A major part of the solution lies in eliminating fossil fuels as transportation fuels. The idea of a car running on sun and water would have been viewed as science fiction as recently as ten years ago.
Dr. Ghulam Sarwar, a brilliant scientist who belongs to Lahore, succeeded in running a car with water as its fuel. He did his research at the National Scientific and Educational Research Center near Jhelum. He was awarded PhD degrees in economics, engineering and transport by British universities. He held an honourable post as a scientific officer in the British Ministry of Transport, and after retirement, instead of enjoying the rest of his life there, he preferred to come back to his homeland and do something productive for it. Nowadays, he teaches many students about scientific research and inventions.
He worked on the principle of splitting water into its two components, hydrogen and oxygen, and then using the result as a fuel. This system combines renewable energy and water to produce a fuel of tomorrow. The mixture of hydrogen and oxygen gas split from water is called “hydroxy gas”. Its molecules are said to be bonded in the ortho-hydrogen state, 2.4 to 4 times more powerful than normal “para-hydrogen”, the opposing state of hydrogen that can be pressurized into a metal tank or bottle, or nowadays stored in plastic bottles that can take high pressure for long periods. Hydroxy gas cannot be stored this way; it is too powerful to be kept in a tank, so it is generated on demand as the vehicle is driven. Running a car purely on water on a US highway is, under present laws, a crime in the USA: such a driver would pay nothing for gasoline and would stop supporting the government fuel tax every time he fills up.
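For reference, the water-splitting step itself is ordinary electrolysis, which breaks two water molecules into two molecules of hydrogen and one of oxygen; the hydrogen-oxygen mixture this produces is what the article calls hydroxy gas. A standard statement of the overall reaction (textbook chemistry, not specific to Dr. Sarwar’s design) is:

```latex
% Overall electrolysis of water: two water molecules yield two hydrogen
% molecules and one oxygen molecule when electrical energy is supplied.
\[
2\,\mathrm{H_2O_{(l)}} \;\xrightarrow{\text{electrical energy}}\; 2\,\mathrm{H_{2(g)}} \;+\; \mathrm{O_{2(g)}}
\]
```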
Water-powered cars, or hydrogen/oxygen-powered cars using 100 per cent water as fuel, are real. By splitting water through electrolysis and creating hydrogen/oxygen gas, you can replace gasoline. We have been taught that this is impossible; engineers, scientists and professors may in fact tell you that you are crazy to believe such nonsense, and they will set out to prove you wrong, basing their physics on the thinking of 1825, Faraday’s laws. Did you know that the first ICE engine ran on hydrogen from water? BMW has such cars, Hyundai will be making them, and Japan endorses them. Many patents and inventions have been bought up and the projects shelved; yes, it is true. Some of the stubborn inventors who do not sell out disappear. Yes, that is right. It happens in the US, Australia, NZ, the UK and India. We are in a crunch to find alternative fuels and the pressure is on. War for oil is not the way to go, and talking politics about a hydrogen future that never comes is not going to help either.
If all ICE motors (internal combustion engines) were converted to burn the hydrogen and oxygen in water as fuel to propel our cars, trucks and so on, we would no longer need gas stations, oil tankers, refineries, smog or war. The only problem would be that the large oil corporations would evaporate. The government worries that national security would be ruptured and that the economy would fall. The ozone layer would heal and we would survive: if we stopped producing smog, the damage to the ozone layer, global warming and the greenhouse effect would go away. The present US administration and DOE (US Dept. of Energy) seem not to have wanted this to happen; they make too much money selling you gas and feeding you the media. Only outlaws drive water-powered cars, according to our present laws, even though they are not making any smog for us to breathe. Fighting for oil under the sand never made any sense to me. They feed us a conspiracy about 19 Arabs with box cutter knives, that took down the Twin Towers, when in fact, C4 explosives, a planned demo, took them down. A bomb went off in the basement, before the first plane struck the North Tower and many more incidents like this.
As ridiculous and hard to believe as it seems, we could all be running our cars on water. Together we could heal the ozone layer, stop fighting over oil under the sand and, best of all, loosen the grip around our necks at the gas pumps. Not only could we get rid of all the fuss of CNG and petroleum shortages and the endless queues at the pumps, but also of many taxes, such as the war taxes of 1965 and 1971 that we are still paying with our petrol bills although those wars ended decades ago. Now it is up to you whether you want to come forward and join the small chain of rational revolutionists or want to promote the hellish politics of self-centred monsters.

Ensuring visibility in network traffic

EVER FACED a problem in retrieving data after a system recovers from a crash? If a company has robust network management in place, there is little chance of losing data. Similarly, recklessness towards the network, and the information that flows across it, can not only impede the speed of work but also affect the progress of a business.
In addition, a sluggish network is exasperating for the users connected to it. But what is it that actually causes a network to become slow and unresponsive?
Network Traffic
What happens when one is stuck in bumper-to-bumper traffic while going home from the office? The vehicles move at a snail’s pace and it takes a long time to reach one’s destination. The same situation, along with some other glitches, occurs when there is a data overload on the network. The data that travels across a network in the form of packets constitutes network traffic, and excess traffic is the major reason behind an unresponsive network.
More often, however, it is weak network monitoring that raises visibility issues. But why is it important to ensure visibility, and what can be done to achieve it?
Steps to Ensuring Visibility
There are three steps by which greater visibility can be created: accessing network traffic, monitoring activity, and analysing the data. To ensure visibility into network traffic, it is important to have access to the packets being sent and received across the network. With access to this data, it is possible to distinguish traffic that is flowing without commotion from traffic that is causing upheaval. But what are the ways to access the data, and are they all risk-free and reliable?
TAP and SPAN are the two main methods of accessing data, with TAP being the more reliable. SPAN is inexpensive initially, but becomes costly when packets are dropped midway before reaching the monitoring tool; this aggravates the original problem and makes it nearly impossible to troubleshoot. SPAN all but fails when internal traffic is heavy. A TAP, however, allows continuous visibility into the network with the lowest rate of disruption.
Access to data only helps in seeing the type and volume of traffic flowing through and across the network. It does not, by itself, reveal how the bandwidth is being used and who is guzzling most of it. If the data and activities are monitored, however, it becomes easy to find this out. Monitoring also allows one to determine whether users are running file-sharing programmes, and to detect a Trojan that is clandestinely passing information on in the background. Selecting the most appropriate network monitoring tool is the job of the network administrator. Monitoring leads to capturing and analysing data, but what if analysis of all the captured data is not required?
Monitoring actually helps in filtering the data. Once it is filtered, it becomes easy to analyse only the data that requires analysis. The analysis helps in identifying the underlying causes of poor visibility and other network issues, and points towards solutions.
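As a rough illustration of this capture-monitor-analyse workflow, the sketch below uses the third-party scapy library (assumed to be installed, and requiring administrator privileges) to capture packets for a short window and report how the observed bytes break down by protocol and port. The interface name and capture duration are placeholders, not recommendations.

```python
# Minimal capture-and-analyse sketch using scapy (assumes `pip install scapy`
# and root/administrator rights). Interface name and timeout are placeholders.
from collections import Counter
from scapy.all import sniff, IP, TCP, UDP

bytes_by_proto = Counter()

def account(pkt):
    """Tally the size of each captured packet under a rough protocol label."""
    if not pkt.haslayer(IP):
        return
    if pkt.haslayer(TCP):
        label = f"TCP/{pkt[TCP].dport}"
    elif pkt.haslayer(UDP):
        label = f"UDP/{pkt[UDP].dport}"
    else:
        label = "other-IP"
    bytes_by_proto[label] += len(pkt)

# Capture for 30 seconds, analysing packets on the fly rather than storing them.
sniff(iface="eth0", prn=account, store=False, timeout=30)

# Simple analysis step: which protocols/ports consumed the most bandwidth?
for label, nbytes in bytes_by_proto.most_common(10):
    print(f"{label:>12}: {nbytes / 1024:.1f} KiB")
```

In a production setting the capture point would be the TAP or SPAN port discussed above rather than a single workstation interface.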
Communication in the form of packets travels from the user to the server via a router, and the server then responds to the user through the router. The problems creating visibility issues can be solved either by replacing servers or by increasing bandwidth, i.e. the speed of the network. Bandwidth may seem a minor hitch, yet it is not, which is why many companies are frequently found switching to faster and more reliable internet connections. The flow of packets is directly affected by the speed of the chosen type of connectivity. Simply put, ISDN cannot work in a large enterprise, as it occupies telephone lines and its speed is limited to a few Kbps. DSL is a better option as it does not tie up telephone lines, but wireless connectivity is the most talked-about option these days. It allows an always-on connection, though to use it one must be geographically within a network coverage area.
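To put those speed differences in concrete terms, the short calculation below estimates how long a 10 MiB transfer would take at a few nominal link speeds. The figures are illustrative round numbers, not measurements of any particular connection or provider.

```python
# Back-of-the-envelope transfer times; link speeds are nominal round numbers,
# not measurements of any specific ISP or technology tier.
FILE_SIZE_BYTES = 10 * 1024 * 1024          # a 10 MiB file

nominal_links_kbps = {
    "ISDN (single channel)": 64,
    "ISDN (dual channel)": 128,
    "Basic DSL": 2_000,
    "Wireless broadband": 10_000,
}

for name, kbps in nominal_links_kbps.items():
    seconds = (FILE_SIZE_BYTES * 8) / (kbps * 1000)   # bits to move / bits per second
    print(f"{name:>22}: about {seconds:7.1f} s")
```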
What does visibility yield?
Achieving visibility translates into cost savings and return on investment. The presence of malicious software, error counts, and the identification of which protocols are using the most bandwidth are all valuable pieces of information gained through network monitoring. In short, monitoring helps uncover problems in the network and allows more network resources to be provided to the connected users, thus yielding savings and greater returns.
A business relying on a network needs that network to be simple yet robust. At a time when every company is making efforts to protect and safeguard its systems and information, it is important to ensure enhanced visibility into network traffic. Even a small act of negligence can lead to serious repercussions such as data loss, slow processing, silent transmission and leakage of information, and ultimately business failure, which no firm wants.

Factors to consider before buying a Solar PV system


OWING TO the unprecedented level of power shortage being faced throughout the major cities and villages of Pakistan, a substantial demand for quick and reliable power solutions for household and industrial applications has been observed. People have now started to understand that quick fixes in the form of UPSs (uninterruptible power supplies) and petrol/diesel generators are not a long-term, sustainable answer to a power crisis that is here to stay.
In such a scenario solar PV systems have shown great acceptability amongst the masses, not only because they can be installed quickly on demand but also because they can be designed to be a dependable source of power. Pakistan is amongst the few countries blessed with tremendous solar resource potential all year round, so solar PV systems can be a commercially viable alternative for a number of applications. Unfortunately, seeing the exponential growth of the off-grid solar PV market, a significant number of “easy money” oriented opportunists and makeshift companies have started to sell solar PV systems without taking into account the numerous engineering design factors and practices that have to be considered before such an installation is actually made. The current situation is pretty clear: traders are selling solar PV equipment like vegetables. The lack of knowledge and awareness amongst customers results in bad deals (system or equipment failures), which in turn damage a prospective solar market and the possibility of widespread adoption of solar energy.
This article’s objective is to educate solar PV customers about the factors that have to be brought into perspective before any solar PV system is bought. Overlooking these factors makes a PV system vulnerable to performance failures and could also lead to damage to system components. The major components of an off-grid solar PV system include the photovoltaic module, the charge controller, the inverter and the batteries.
A solar PV module is the most sophisticated component of the complete system, and designing a system energised by PV modules requires a thorough study of a module’s performance behaviour in outdoor conditions. PV module manufacturers rate their panels at standard test conditions based on IEC 61215 (crystalline) and IEC 61646 (thin film). PV module performance, or output, in outdoor conditions is strongly linked with three important factors: irradiance, cell temperature and air mass. Other factors which also influence module output are wind speed, shading, dust accumulation, module inclination and humidity; all of these contribute to the net wattage produced by the module.
With so many varying conditions worldwide, a standard basis of comparison between PV panel manufacturers was essential, and this is where standard test conditions (STC) were introduced. STC specifies an irradiance of 1000 W/m2, a cell temperature of 25°C and an air mass of 1.5. This means that when STC prevails the panel will produce its rated output; in actuality, however, such conditions are rarely ever reached. Pakistan generally has high ambient temperatures, which result in higher cell temperatures and thus reduced output. Solar irradiance also varies throughout the day, typically from 100 W/m2 to 1000 W/m2. Knowing the actual output of a PV panel under all these conditions is therefore crucial for designing a reliable solar PV system. Amateur designers tend to make sweeping generalisations, for example assuming a number of sunshine hours during which STC conditions hold, which leads to systems that are likely to fail when conditions deviate. All contingencies, such as the variability of environmental conditions and their influence on every component, must be taken into account.
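To make the effect of non-STC conditions concrete, the sketch below estimates a module’s real output from its STC rating using a simple linear temperature-coefficient model. The temperature coefficient, the NOCT-style cell-temperature estimate and the example numbers are typical illustrative values, not the datasheet of any particular panel.

```python
# Rough estimate of PV module output away from Standard Test Conditions (STC).
# Coefficients and example figures are illustrative, not a specific datasheet.

def estimate_cell_temp(ambient_c, irradiance_w_m2, noct_c=45.0):
    """Approximate cell temperature from ambient temperature and irradiance."""
    return ambient_c + (noct_c - 20.0) * irradiance_w_m2 / 800.0

def estimate_output_w(p_stc_w, irradiance_w_m2, ambient_c,
                      temp_coeff_per_c=-0.0045):
    """Scale the STC rating by irradiance, then derate for cell temperature."""
    cell_t = estimate_cell_temp(ambient_c, irradiance_w_m2)
    power = p_stc_w * (irradiance_w_m2 / 1000.0)          # irradiance scaling
    power *= 1.0 + temp_coeff_per_c * (cell_t - 25.0)     # temperature derating
    return power

# Example: a nominal 250 W module on a hot afternoon (40°C ambient, 800 W/m2)
print(f"{estimate_output_w(250, 800, 40):.0f} W")   # noticeably below the 250 W rating
```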
Batteries are another component that requires careful study and understanding before being deployed with any system. The batteries used should be capable of undergoing deep-cycle discharges; that is, their allowable depth of discharge is higher than that of conventional car batteries. There is a range of different battery types that can be used, each with its own advantages and disadvantages, and the selection must be made with the application in perspective. Ambient temperature and discharge time also determine a battery’s charge-holding capacity and must be considered in the design of a reliable solar PV system. The number of cycles (i.e. the life of the battery) that it can withstand under the anticipated environmental conditions and the given application must be communicated to the customer. Battery life must be accurately determined, because this is the one component that has to be replaced three to four times within the life of the PV system and thus represents a recurring cost. Mediocre PV system designers fail to understand that cutting corners while selecting batteries is detrimental to the economics of a solar PV system.
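A common first-pass battery-bank sizing calculation, consistent with the depth-of-discharge point above, is sketched below. The daily load, days of autonomy, allowable depth of discharge, system voltage and efficiency figure are illustrative assumptions, not a recommendation for any particular installation.

```python
# First-pass battery-bank sizing sketch; all inputs are illustrative assumptions.

def battery_bank_ah(daily_load_wh, days_of_autonomy, depth_of_discharge,
                    system_voltage, round_trip_efficiency=0.85):
    """Amp-hours of storage needed so the bank never goes below the allowed
    depth of discharge, including a rough allowance for conversion losses."""
    energy_wh = daily_load_wh * days_of_autonomy / round_trip_efficiency
    return energy_wh / (depth_of_discharge * system_voltage)

# Example: 2 kWh/day load, 2 days of autonomy, 50% DoD, 24 V battery bank
print(f"{battery_bank_ah(2000, 2, 0.5, 24):.0f} Ah")   # roughly 390 Ah
```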
The charge controller and the inverter are considered the heart and brain of a solar PV system, and sizing both components adequately, based on a number of factors, is crucial for a long and reliable life of the system. The charge controller can only be sized once the maximum and minimum current/voltage output of the PV array is determined, and the inverter can only be sized once the load wattage dynamics are understood. A detailed understanding of PV performance is necessary to size the right charge controller.
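Continuing the same illustrative example, the sketch below sizes the charge controller from the array’s worst-case current and the inverter from the expected continuous load, with a conventional safety margin. The 25 per cent margin and the example figures are assumptions for illustration, not a standard.

```python
# Illustrative controller/inverter sizing with an assumed 25% safety margin.

def controller_min_current(array_short_circuit_a, safety_margin=1.25):
    """Minimum controller current rating for the PV array's worst-case current."""
    return array_short_circuit_a * safety_margin

def inverter_min_watts(continuous_load_w, headroom=1.25):
    """Minimum continuous inverter rating, with headroom above the expected load."""
    return continuous_load_w * headroom

# Example: four parallel modules of 8.5 A short-circuit current, a 1.2 kW load
print(f"{controller_min_current(4 * 8.5):.0f} A controller, "
      f"{inverter_min_watts(1200):.0f} W inverter (minimum)")
```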
Companies in Pakistan currently selling solar PV systems are designing them without any regard for the technicalities associated with executing such a task; most are primarily traders trying to get the most out of the sustainable energy hype. The end result is that people end up blaming the technology for not being able to cater to their power demands, which makes the renewable energy market even more challenging to work in. People should therefore check whether the solar solution provider they approach is capable of providing a solution that incorporates all of the factors above. The government should also take appropriate steps to ensure that people get the necessary training and awareness.

Tsunami foresight: Normative science perspectives

 
SINCE THE day humans came into being and started thinking of new techniques to survive on Earth, the only planet we know to hold life, we have been doing foresight of some sort. We try to avoid upcoming hazards in one way or another: we try to keep ourselves safe from disease, we keep ourselves provisioned by stockpiling food, and we hoard luxuries by calling them necessities, such as oil, gold and silicon. Humans are clever beings, observed to be the only ones who manipulate nature. They try to come up with sophisticated technologies to serve their needs for joy and security through different tactics of invention and innovation, in addition to consensus, with little regard for the laws of nature. Often we hear questions such as why we put on clothes, why we keep doing what our forefathers have been doing for ages, why women take care of babies at home while men work outside, or why we give charity to sufferers whose suffering is said to be only the result of their own wrongdoing…
We humans have a strong tendency to think about the future, to secure the things we fear for and to give ourselves liberty from fears and restrictions. Tsunamis are one of the fears that haunt us, and we wonder how to deal with something so completely unpredictable. The tendency to plan for a secure future motivates us to attempt foresight about such natural hazards, even though there is a common belief that they are controlled not by humans but by God. Still, as God’s vicegerents on earth, efforts are on to control and foresee the things we cannot see.
Tsunamis are triggered by undersea earthquakes, which are due to the movement of tectonic plates deep in the earth’s crust. The movement always has a cause, but how do we predict that it is about to happen? Can we really foresee a future tsunami or earthquake and take action accordingly to avoid the damage? Do we have in mind a technology that keeps track of these movements, ultimately feeding a simulation that produces a model of the damage that may occur? Or is foresight simply a matter of avoiding all those actions which result in tsunamis and earthquakes? The common person will say that we should minimise the damage by living a simple life, keeping away from the sea and always living like a traveller, so that the damage, if it occurs, is minimal. Others would try to build their houses like ships, so that if disaster ever strikes, these ship-like homes might transform to escape the damage, going underground or turning into submarine-like vessels.
There are persons of strong faith who believe that it is all because of our deeds: we do good and the result is good, while for bad deeds, troubles and issues are ready to strike in a moment. These believers may have their assumptions, but science needs to come up with reasoning for them. Does a bad deed somehow have an impact on our environment that in turn affects the earth’s crust, making the plates move, or at least generating the potential to move them in the near or far future? Is there a linkage between deeds and natural hazards? Can we come up with a research project showing that a common type of deed resulted in a disaster that was entirely natural and out of human control? May we not study the deeds or common habits of a place where these natural disasters occur? It is all metaphysics, based on the spirit. Religions have space for this, but can science come up with a proof of a linkage between the two: the science of tsunamis and normative science?

Enhanced capacity a must to handle climate change

 
The environment has become a national emergency and a non-traditional security threat, which necessitates enhanced research capacity to cope with the daunting challenges of environmental degradation and climate change.
Researchers stressed this during a workshop on “Environmental and Natural Resource Economics” recently organized by SDPI and the South Asian Network for Development and Environmental Economics (SANDEE), Nepal.
As many as 15 young researchers from Pakistani universities mastered the art of research in environmental economics during the 4-day training workshop.
The workshop provided practical training and participants developed research proposals under the guidance and assistance of master trainers.
Proposals submitted by participants to SANDEE will be considered for grants to pursue various environmental projects across Pakistan. The workshop also featured policy lectures that highlighted the political economy of climate change reforms in Pakistan.
“Workshop is aimed at building research capacities of young Pakistani researchers in area of environment and natural resource economics,” said Atif Khan, Senior Capacity Building Specialist at SDPI in his remarks.
Talking of building research capacity, Priya Shyamsundar, Director of SANDEE, said that SANDEE provides grants to young university researchers to work on new and innovative research projects in the area of environmental economics.
Speaking at the concluding session, Dr. Vaqar Ahmed, Head of the Economic Growth Unit at SDPI, pointed out the disconnect between policy and research, saying that Pakistani academia is producing research which is socially irrelevant.

Say no to foods contaminated with parasites

 
FOOD BY definition is any substance, of plant or animal origin, that when consumed provides nutritional support for the body to produce energy, maintain life, or stimulate growth. It contains nutrients essential for life, such as fats, carbohydrates, proteins, minerals and vitamins. Since ancient times, people have secured food through two methods, hunting and stocking, that are still practised today. The food industry has now become indispensable, feeding more than seven billion people.
The legitimacy of the food industry in Pakistan has become questionable over the past few decades, which might be associated with the increasing trend of urbanization. There is a set of four laws that deals specifically with food safety. Three of them directly address issues related to food safety, while the fourth, namely the Pakistan Standards and Quality Control Authority Act, is indirectly involved.
Food-borne infectious diseases are well recognized and are becoming more common in human beings. Bacterial, fungal and viral infections have predominantly been linked with food-borne illness, while parasitic infections have received less attention. The significance of parasitic food-borne diseases is generally under-recognized in the developing world, which might be due to inadequate systems of diagnosis, monitoring and reporting. Factors that may explain the emergence of some zoonotic parasitic diseases are: (a) international marketing of food, (b) increased frequency of travel, (c) an increased population of susceptible hosts because of ageing, malnutrition, infection and other underlying medical conditions, and (d) changes in lifestyle, such as the increase in the number of people eating food from restaurants, hotels and famous fast-food chains as well as from street food sellers who do not always respect food safety.
In many developing countries, (a) inadequate sewage systems, (b) the practice of watering vegetables with sewage water and (c) the draining of sewage water into canals, rivers and seas have increased the frequency of parasitic infections transmissible through faecal contamination of foodstuffs. It is often difficult to associate an outbreak with a particular food item. The predominant vehicles of transmission of parasitic diseases, namely water, meat, milk, eggs and vegetables contaminated with sewage water, are discussed below.
Emerging water-borne parasitic infections that may be acquired through food include those caused by Cyclospora spp., Giardia spp., Cryptosporidium, Fasciola, Fasciolopsis, Echinococcus (E.) granulosus and E. multilocularis. The infective stages of these parasites, shed into the environment with the faeces of infected animals, can contaminate foodstuffs such as vegetables, fruits and fruit juices. Of particular concern in industrialized countries are water-borne protozoan infections, which are difficult to control given their high level of resistance to environmental conditions.
Milk is a complete diet for all age groups; however, a major portion of milk is consumed by infants and growing children, so milk-borne parasitic problems are a significant health and food safety issue for the young. Milk-borne parasitic infections mainly include Enterobius spp., Toxoplasma gondii and T. solium. Vertical transmission of Ascaridia galli in eggs is an alarming situation for egg consumers, with special reference to body builders in rural areas who use raw eggs as a source of protein.
There are some serious issues facing developing countries, including: (a) little consumer understanding of food safety issues, (b) a fragmented industry, (c) a small and unorganized sector accounting for a large number of food processing units, (d) unskilled food handlers, (e) the diversity of food products, and (f) inadequate laboratory testing infrastructure and conventional practices for storing and transporting food.
Culinary habits, primarily involving seafood, in countries other than India and Pakistan play a major role in exposing human populations to these zoonotic parasites. In particular, raw seafood in China has been reported to be an important source of zoonotic infections in humans. The increasing demand for food, particularly in developing countries like Pakistan, will lead to an increase in livestock, agricultural products, poultry and fish production and an intensification of production systems. With the passage of time, the rise in public concern over the security of the food chain and food safety has helped to focus more attention on zoonotic parasites.
Monitoring and control of food-borne parasites can be carried out using modern risk assessment tools, including: (a) monitoring of water and food utilizing new technologies, e.g. serological and molecular approaches, (b) health education, (c) social and economic development, and (d) proper deworming, vaccination and prophylactic mass treatment of food animals.
Following are some golden rules proposed by the WHO for the prevention of food-borne zoonoses at the consumer’s level: (a) adequate food processing, (b) proper cooking, (c) eating fresh food, (d) proper storage, (e) thorough reheating, (f) no contact between raw and cooked food, (g) washing hands before eating, (h) cleanliness of food preparation surfaces, (i) protection of food from pests, and (j) using potable water for cooking and washing foodstuffs. Implementation of these rules at the consumer’s end in developing countries is the need of the hour. Policy makers, moreover, may prioritize an agenda that includes: (a) health awareness campaigns about food procurement and processing for consumers, (b) encouragement of the cultivation of transgenic crops and food animals biologically resistant to parasitic and other infections, and (c) provision of treated, chemically hazard-free water to farmers for the irrigation of crops, vegetables and fruits. In addition, 95 per cent of our population is still deprived of pure drinking water, an issue which needs to be highlighted and addressed to reduce major food-borne infectious threats to the nation.

Italy funds olive trees plantation in Pakistan

 
The Pakistan Agricultural Research Council (PARC) has started a project, “Promotion of Olive Cultivation for Economic Development and Poverty Alleviation”, in the country with the financial assistance of the Italian government.
According to details, under this project olive saplings will be planted on thousands of acres in Punjab, Khyber Pakhtunkhwa, FATA and Balochistan.
In this connection, researchers and olive growers have recently organised an Olive Plantation Day at Barani Agricultural Research Institute (BARI) Chakwal.
BARI has established an olive nursery for distribution of olive plants to growers in the region. PARC and BARI are engaged in making every effort to promote olive cultivation in the country.
Addressing the gathering, scientists emphasized the importance of the olive and of olive cultivation in Pakistan. The Punjab government has already declared the Pothowar area an Olive Valley. On the occasion, olive plants were distributed among olive growers of the Pothwar region in order to popularize olive cultivation in the area.
Dr. Muhammad Munir Goraya, Senior Director (Crops) at PARC and National Project Director, is coordinating and supervising the activities of this project at the national level.

Pakistan heading towards better utilization of space technology


 

THE TREMENDOUS developments in space sector have enabled mankind to probe deeper into space and helped in developing better understanding of the universe. Pakistan has been utilising the applications of space science and technology in various fields.
“The approval of Pakistan’s Space Vision 2040 has placed the country among the space faring nations,” said Chairman SUPARCO, Maj Gen (Retd) Ahmed Bilal while addressing the three-day National Space Conference held here in the federal capital.
The conference was organized by the Pakistan Space & Upper Atmosphere Research Commission (SUPARCO). Senator Sughra Imam, Advisor of the Policy Research Wing, was the chief guest of the ceremony.
The Chairman of SUPARCO said that the conference had been organized with the objective of exchanging ideas and sharing the work undertaken by various national and international experts.
He further said that Pakistan has embarked on an ambitious plan for achieving self-reliance in space science, technology and applications through cooperation and collaboration with other space-faring nations. He appreciated that a large number of government departments are availing themselves of SUPARCO’s services and thus paving the way for sustainable national development.
Eminent experts from Egypt, Nigeria, Ukraine, Canada, Malaysia, WHO, Asia Pacific Space Cooperation Organization, United Nations Committee on the Peaceful Uses of Outer Space (UN COPUOS) and many more attended the event.
Presiding over the ceremony, Sughra Imam read out the President’s message. She quoted, “The government encourages all national agencies to use advanced technologies, and in particular space technology and applications, for improving efficiency, enhancing transparency and reducing cost in their day to day work.”
SUPARCO has played its part well and has made significant contributions to the socio-economic development of Pakistan. In recent years, data and services available from space-based assets as well as ICTs have been used most effectively in the fields of agriculture, water resource management, land use mapping and disaster management.
A number of sessions were also held during the conference. The first session focused on “National Space Programs – Pakistan & other developing countries” in which the Chairman SUPARCO briefed about the Pakistan’s national space programme.
Speakers from the National Space Research and Development Agency (NASRDA) of Nigeria, the National Authority for Remote Sensing & Space Sciences (NARSS) of Egypt, and the Embassy of Ukraine in Pakistan presented the national space programmes of Nigeria, Egypt and Ukraine respectively.
The other sessions of the conference focused on space technology applications such as tele-health, in which the representative of the WHO briefed participants on the WHO’s role in promoting space technology for health and tele-health initiatives around the world.
The concluding session focused on the need to support space education activities to enrich youth development and to take a leading role in activities on space education and awareness including educators’ training and development.
Aisha Jagirani from SUPARCO highlighted space education and awareness activities in Pakistan, and Javaid Younas of the Virtual University gave a briefing on tele-education initiatives in Pakistan.
On the occasion, a scientist from SUPARCO presented a paper on “Dengue Surveillance using Spatial Technology”. An overview of tele-health initiatives and activities in Pakistan was presented by the Director of the Telemedicine and e-Health Training Center, Holy Family Hospital, Pakistan.
Presiding over the concluding ceremony, the chief guest, Federal Minister for Defence Syed Naveed Qamar, said that with the advancements in satellite-enabled services and space technology-related innovations, there is a greater need for investment in space technology and applications, and that cooperation and joint ventures among countries are becoming more imperative.
The conference highlighted the significance of international cooperation between developing countries for having effective space programmes.

Iran Bans Google Search, Gmail In Favor of Domestic Network

     
Iran has decided to block Google Search and Gmail in the country and to switch its citizens onto a domestically developed National Internet Network, with the aim of improving the country’s cyber security; however, it is said that the ban was a reaction to the recently produced anti-Islamic movie.
The announcement, which came from a government deputy minister on state television, said that Google’s search engine and its email service would be blocked in the country, with no indication of whether the blockade is temporary or permanent.
Authorities said that they have informed citizens through text messages on their cell phones.
The Iranian Students’ News Agency (ISNA) said the Google ban was connected to the anti-Islamic film posted on the company’s YouTube site, which has caused outrage throughout the Muslim world. There was no official confirmation, though.
Iran is considered a heavily censored state, which regularly blocks internet websites.
The Iranian government argues that these measures are taken to safeguard its cyberspace, which has been attacked by the US and other anti-Iranian nations in the past.
Iran has long been pushing to establish its own national internet system which, it believes, will offer better security to the nation and its citizens. Currently this National Internet System is used by state agencies; however, efforts are being made to bring ordinary citizens onto the system.
Iran has been hit in the past with computer worms and viruses that targeted and impacted its nuclear programme.

Naltar valley


Naltar Valley is very close to Gilgit and Hunza. It is 40 km (25 miles) from Gilgit and is accessible by jeep. Naltar is a forested (pine) village recognized for its flora and fauna and its terrific mountain scenery. Nowadays communication is available thanks to the efforts of the Pakistan Army’s Special Communications Organization (SCO). There are ski lifts run under the Ski Federation of Pakistan. Transport is available from Gilgit to Naltar at specific times; outside those hours one has to arrange it on one’s own.
In the Naltar Valley there is a lake known as Bashkiri Lake, at a distance of 32 kilometres (20 miles) from Naltar Bala. The lake offers a breathtaking and amazing sight both in summer and in winter. The road from the village to the lake is unmetalled and narrow, running alongside a stream that comes down from the mountains, and there are many attractive scenes along the way. In winter it is almost impossible to reach the lake by vehicle because of the snow, 10 to 15 feet deep, on the road.

Hunza Valley


The Hunza Valley is a gigantic mountainous valley in the Gilgit division. It is located to the north of the Hunza River, at an elevation of around 2,500 metres (8,200 feet). The area of Hunza is about 7,900 square kilometres (3,100 sq mi). Karimabad (formerly named Baltit) is the core town, which is also very famous among tourists.
Hunza-Nagar is one of the most striking and attractive places in the region. It offers historical views of Altit Fort, Baltit Fort and Ganish Fort, along with towering peaks. It is ringed by high peaks, namely Rakaposhi (7,788 m), Ladyfinger Peak (6,000 m) and Darmyani Peak (6,090 m). Several languages are spoken here: Shina in Lower Hunza, Burushaski in Central Hunza and in Nagar, and Wakhi in Upper Hunza.
Nagar Valley is well known for game animals such as Marco Polo sheep, brown bears and snow leopards. Gulmet, Faker and Bar are famous tourist attractions in Nagar. Golden Peak (Spantik) and Rakaposhi are located in the Nagar Valley.

Kaghan Valley


The Kaghan Valley is situated in northern Pakistan, in the north-east of Mansehra District, Khyber Pakhtunkhwa Province. It catches the attention of many visitors from across the country.
The Kaghan Valley’s distant peaks, lakes, vales, waterfalls, streams and glaciers remain in a pristine state, with a number of them lying within Lulusar-Dudipatsar National Park. The valley is a prime destination throughout summer, from May through September. In May the temperature ranges between a maximum of 11 °C and a minimum of 3 °C.
From the middle of July to the end of September the Naran-Babusar road beyond Naran is open through the Kaghan Valley and over the Babusar Pass. Access is limited during the monsoon and winter periods. The Kaghan region can be reached by road via the towns of Balakot, Abbottabad and Mansehra on the Karakoram Highway. In Balakot one can find buses and other transport to reach Naran village and the valley.
The road from Balakot rises alongside the Kunhar River through gorgeous forest and the towns of Paras, Mahandri, Jared and Shinu. The valley narrows along this stretch and the views remain close at hand until the pass is climbed, when the neighbouring mountains come into expansive view. One locale enclosed by mountains and forests and renowned for its scenery is Shogran, east of the main Kunhar River. The picturesque Payee Lake, Malika Parbat, Siri Payee and the Makra Peak are in close proximity.

Naran

A three-hour drive from Shogran takes one to Naran, a small tourist village open only during the sightseeing season from May to September. The rest of the time it is covered with snow. Most guests come to Naran to pay a visit to Saif-ul-Muluk Lake (at an elevation of 10,500 feet), six miles east of town. If the road is open, transport by jeep can be arranged. If the road is closed, it is an easy, steady three-hour walk, and the lake is a charming spot for a picnic.

Karachi


Karachi is the largest city of Pakistan, its main seaport and its main financial hub. Its population is approximately 16 to 18 million according to estimates (the last census was held in 1998). Karachi is one of the world’s largest cities in terms of inhabitants and the 10th largest urban agglomeration. It is Pakistan’s leading centre of banking, industry and trade, and is home to Pakistan’s leading corporations engaged in textiles, shipping, the automotive industry, entertainment, the arts, fashion, advertising, publishing, software development and medical research. The city is considered a nucleus of higher education in South Asia and the wider Islamic world.
Karachi was initially the capital of Pakistan, until the construction of Islamabad. It is ranked as a Beta world city. It hosts the Port of Karachi and Port Muhammad Bin Qasim, two of the region’s largest and busiest ports. After the independence of Pakistan, hundreds of thousands of Urdu-speaking migrants, or Muhajirs, from India, East Pakistan (later Bangladesh) and other parts of South Asia came to settle in Karachi, which is why the population of the city increased dramatically.
The city stretches over 3,527 sq km (1,362 sq miles), nearly four times the area of Hong Kong. Karachi is known by different names, such as the “City of Lights” and “The Bride of the Cities” for its dynamism, and the “City of the Quaid”, since the founder of Pakistan, Quaid-e-Azam Muhammad Ali Jinnah, was born and is buried in Karachi.

Las Danna


A 15-kilometre metalled road from Bagh takes one to Las Danna, which lies about 8,612 feet above sea level. Las Danna is famous for its fascinating landscape and natural beauty. From Las Danna, three roads branch off the main road, towards Mahmood Gali-Palangi, Haji Pir-Aliabad and Abbaspur-Hajira respectively. A visitors’ rest house is available there for accommodation.

Saiful Muluk Lake


Saiful Muluk Lake is situated at the northern end of the Kaghan Valley near Naran, in the north-east of Mansehra District of Khyber Pakhtunkhwa Province, Pakistan. Sitting at an elevation of 3,224 metres (10,578 feet) above sea level, it is among the highest lakes in Pakistan.
The lake is reachable by a 14-kilometre road from Naran throughout the summer season. On foot, the trek from Naran to Saiful Muluk Lake takes about one to two hours. The water is crystal clear with a slight green tone; its clarity comes from the glaciers all around the lofty basin which feed the lake. The climate here is mild during the day, while the temperature falls below freezing at night.
A fairy tale called Saiful Muluk, written by the famous Sufi saint Mian Muhammad Bakhsh, is associated with the lake. It is the tale of a prince of Persia who fell in love with a fairy princess at the lake. The lake’s beauty is of such a degree that locals believe fairies come down to it on full-moon nights.
The Guardian ranked Saiful Muluk Lake as the fifth-best tourist spot in Pakistan. Mansehra District has had a prosperous tourism industry in the past because of its numerous mountain ranges and Saiful Muluk Lake.

Pishin Valley


The Pishin Valley is 50 kilometres from Quetta and is filled with abundant fruit orchards, which are irrigated by “karez” (traditional underground water channels). Yet another attraction of cool waters is the man-made lake at Bund Khushdil Khan, where a broad variety of ducks lends tempting loveliness throughout the winters. The celebrations held here include a lively programme of folk dancing by thousands of participants from diverse areas. Horse jumping, trick horse riding, trick motorcycle riding, dare-devil motor-car driving and a dog-and-hare competition are among the tourist attractions of the celebration. The most important attraction of the show, however, remains the inspiring exhibit of the best available specimens of Pakistani domestic animals. As the sun sets over the imposing Fortress Stadium, the site of the show, fireworks, military tattoos and brass band displays liven up the evenings and mesmerize the audience.

LinkedIn to Setup Offices in Pakistan

     
LinkedIn, the world’s largest networking website for professionals, is making its way towards starting operations in Pakistan by setting up local offices in the country, the Express Tribune reported, citing Saleem H Mandviwalla, Chairman of the Board of Investment (BOI) in Pakistan.
The BOI Chairman said that the board had held detailed discussions with a LinkedIn director about the move.
Mr. Mandviwalla further revealed that LinkedIn is likely to set up offices in Pakistan by the end of this year or early next year with an initial investment of USD 10 million. “This will lead to the setting up of LinkedIn’s proper operations in Pakistan,” the chairman said.
He also mentioned that LinkedIn has shown great interest in the Pakistani market, which offers huge potential for internet companies with its 30 million and growing internet users.
Statistics suggest that there are 1.2 million LinkedIn users in Pakistan, largely consisting of the country’s business professionals and decision makers.
The Board of Investment has said that it will use LinkedIn as an effective tool and an important platform for its business development activities.

Smart Grid for Pakistan

     
The whole world is abuzz with the smart grid concept, and with each passing day we are witnessing countries moving to modernize their outdated power grid infrastructure. But what really is this smart grid phenomenon, and how is it important for a developing country like Pakistan, which is currently facing the worst energy crisis of all time?
A smart grid is a modern version of the electric power grid infrastructure which uses communication technologies to enhance power generation, delivery and utilization.
For non-technical readers: electricity generated at power stations is carried by a network of power lines to the cities, where it is fed to our homes. The electric meters installed in our homes, the “black dabbas with a rotating disc”, record daily usage, which is collected by an inefficient network of Wapda meter readers once each month. This is the bottleneck in our electric infrastructure, since Wapda does not have a real-time picture of load usage in individual homes; rather, it gets one only once every thirty days.
A smart grid, apart from revolutionizing other aspects of the electric infrastructure, copes with this problem by having the meters communicate with the main control room (the so-called Advanced Metering Infrastructure), so that electric utilities have a clear idea of how the generated load is being utilized and can hunt down line losses due to the “kunda culture”, in which every gentleman who pays his bills regularly has to bear the expense of electricity thieves.
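As a toy illustration of what Advanced Metering Infrastructure makes possible, the sketch below models interval readings reported by smart meters and compares the energy fed into a feeder with the sum of metered consumption, flagging intervals where the gap suggests non-technical losses such as theft. All meter IDs, readings and the threshold are made-up illustrative values.

```python
# Toy AMI-style loss check: compare energy supplied to a feeder with the sum
# of metered consumption over the same interval. All values are made up.
from dataclasses import dataclass

@dataclass
class IntervalReading:
    meter_id: str
    kwh: float          # energy consumed in this 30-minute interval

def flag_feeder_losses(supplied_kwh, readings, loss_threshold=0.08):
    """Return (loss fraction, flagged?) for one feeder and one interval."""
    metered = sum(r.kwh for r in readings)
    loss_fraction = (supplied_kwh - metered) / supplied_kwh
    return loss_fraction, loss_fraction > loss_threshold

readings = [IntervalReading("MTR-001", 0.8),
            IntervalReading("MTR-002", 1.1),
            IntervalReading("MTR-003", 0.6)]

loss, flagged = flag_feeder_losses(supplied_kwh=3.1, readings=readings)
print(f"loss = {loss:.1%}, investigate = {flagged}")
```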
Apart from this, load forecasting becomes possible, allowing power generation to be optimized. Another important aspect of the smart grid is the integration of renewable energy sources into the power grid: solar panels installed in homes can generate energy of their own, and the excess can be sold back to the electric utilities.
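The idea of selling excess energy back is essentially net metering. The sketch below shows the simplest possible monthly settlement, in which energy exported by a rooftop solar system is credited against energy imported from the grid; the tariff and the example figures are purely hypothetical.

```python
# Simplest possible net-metering settlement; tariff figures are hypothetical.

def monthly_net_bill(imported_kwh, exported_kwh,
                     import_rate=15.0, export_credit=10.0):
    """Bill in rupees: pay for imports, receive a (lower) credit for exports."""
    return imported_kwh * import_rate - exported_kwh * export_credit

# Example: a home imports 300 kWh from the grid and exports 120 kWh of surplus solar
print(f"Rs {monthly_net_bill(300, 120):.0f}")   # Rs 3300 instead of Rs 4500
```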
These ideas seem to be very captivating especially during this period when Pakistan is facing 12-18 hours load shedding. However upgrading power grid infrastructure is not a matter of seconds. It requires huge investment and careful planning to modernize the power grid which was designed decades ago.
The fundamental objective in laying out power grid infrastructure is to analyze load demand and predict its value for minimum five years. Unfortunately Pakistan is stuck at this point where our demand is exceeding the supply. It is unclear whether it’s due to poor planning or it’s politically jeopardized but the point is that it is the people and industry who are being victimized. If we are able to surpass this barrier then we should not sit contented keeping hands on hands and waiting for progressive countries of the world including our beloved Neighbor (Which by today has started conquering the space) to direct us for smart grid path rather we should look forward to it as soon as possible so that we are able to enjoy benefits of this phenomenal technology which are huge energy savings, less energy bills, fewer blown transformers and efficient load supply scenario.
The writer is a research student in Malaysia working on smart power grid infrastructure.

Zionist Led Anti-Islam Campaigns and How Best to React


They have done it again and they will most certainly keep propagating such anti-Islamic campaigns in future. Muslims are mostly unsure on how best to react to such incidents.
From blocking YouTube and Facebook to demanding the removal of anti-Islamic content, we have tried it all and quite frankly nobody listened.

Boycott and Ban is Not a Long-Term Solution

Muslims need to use internet and technology in much the same way as everyone else and this is one thing that enemies of Islam are well aware of. There is a pattern of use of new media in recent anti-Islamic campaigns (remember Facebook and Twitter support of draw Muhammad day). They are hitting only where it hurts most.
Unlike using traditional media (as they did in 2005) now they are using internet to provoke Muslims, which they will keep following until they get a better tool than internet to spread their anti-Islam agenda.
The dual standard and bias western media allows anti-Islamic content under the umbrella of “freedom of speech” and this has been discussed many times.
I have always supported the ban on social media websites in the past, as many of you did, when they refused to respect the sentiments of Muslims, but I think this is not a long-term solution.
The use of these services is now more of a necessity than a choice. You can’t live in a world without using these technologies or you will be left further behind. Simply assume a life without Gmail, Google, Android or iOS – what else you will be left with? Isolation only.
The social networks are reality of new world order and Jewish lobby is influencing and controlling the new media as they always controlled the traditional media. If you don’t know only 6 Jewish companies control 96% of world media.
And they are using this control to shape up our habits, trends, thinking, culture and to provoke us.
Before you decide to criticize me of blaming Jews for all our problems, know some facts about the movers and shakers of new media by clicking this link.
Here let me ask you one question:
With such kind of power and mission, do you still think that Google, Facebook and YouTube will listen to Muslim concerns and will take action against anti-Islam content?

Long Term Solution

The only long-term solution for Muslims is to play a vital role in new media by competing with today’s technology giants. Muslim entrepreneurs have to work smart to tip the new media scale in favor of Muslims. Why have we not seen a single web 2.0 success story from the Muslim world, you may ask? In my opinion, we fall short on two fronts.
“Remarkable” Ideas
The most important ingredient for the site that will be the next face of web 2.0 is a unique idea that is remarkable (worth talking about). People move from one service to another every now and then (remember friendster, orkut, myspace, Bebo etc) so new “remarkable” ideas will eventually replace Facebook, Twitter and YouTube. These new social media and network platforms will keep replacing the old ones. The future of content distribution will remain online for many years to come.
Raising Venture Capital
Building a site of epic magnitude will come at a cost. That is the reason why venture capital firms exist. You will need to hire great engineers and designers in industry to launch your idea, meet scaling up challenges and need money to rent office space and other regular expenditures of company.
In short, you need the seed capital from angel investors for your startup. Raising capital is not easy unless you are in Silicon Valley, have connections with VCs beside a great idea to pitch. Things can change if Muslim billionaires consider investing in local startups instead of buying luxury private jets.

The Immediate Response:

Convey the Message of Islam using Available Resources

While sharing Quranic verses and Hadiths on Facebook or YouTube is good, we should concentrate on spreading the message of Islam to non-Muslims. Getting more ‘Likes’ or ‘Shares’ does not do justice to Islam. Here are a few ideas on how we can use the internet against the evil that is being spread in the name of ‘freedom of speech’.
  • Provide resources for people who want to know about Islam (Hint: research what people want to know about Islam using Google trends/Google insights for search etc and build sites around those topics).
  • Reply to the questions that people are raising about Islam on forums, YouTube and other social media websites. Do not let people spread evil against Islam and get away with it, reply with logic.
  • Share inspirational stories of newly converted Muslims (there are hundreds of them online).
  • Share the Islamic stance and views on issues of modern world including terrorism, justice, equality and human rights.
Internet is double-edged sword. If Zionists are using new media to propagate anti-Islam campaigns, we can use their weapons to show the world the real message of Islam and reach to the masses.
At the same time, we have to work smart to play our role in new media. It is not a matter of choice anymore and censorship is not a permanent solution.

Conclusion:

Boycotting them isn’t the solution. Learn, regroup, unify and then compete with them.

Introduction To SEO


   This document is intended for webmasters and site owners who want to investigate the issues of seo (search engine optimization) and promotion of their resources. It is mainly aimed at beginners, although I hope that experienced webmasters will also find something new and interesting here. There are many articles on seo on the Internet and this text is an attempt to gather some of this information into a single consistent document.

   Information presented in this text can be divided into several parts:
   - Clear-cut seo recommendations, practical guidelines.
   - Theoretical information that we think any seo specialist should know.
   - Seo tips, observations, recommendations from experience, other seo sources, etc.

1. General seo information

1.1 History of search engines
   In the early days of Internet development, its users were a privileged minority and the amount of available information was relatively small. Access was mainly restricted to employees of various universities and laboratories who used it to access scientific information. In those days, the problem of finding information on the Internet was not nearly as critical as it is now.

   Site directories were one of the first methods used to facilitate access to information resources on the network. Links to these resources were grouped by topic. Yahoo, opened in April 1994, was the first project of this kind. As the number of sites in the Yahoo directory inexorably increased, the developers of Yahoo made the directory searchable. Of course, it was not a search engine in its true form because searching was limited to those resources whose listings were put into the directory. It did not actively seek out resources and the concept of seo was yet to arrive.

   Such link directories have been used extensively in the past, but nowadays they have lost much of their popularity. The reason is simple – even modern directories with lots of resources only provide information on a tiny fraction of the Internet. For example, the largest directory on the network is currently DMOZ (or Open Directory Project). It contains information on about five million resources. Compare this with the Google search engine database containing more than eight billion documents.

   The WebCrawler project started in 1994 and was the first full-featured search engine. The Lycos and AltaVista search engines appeared in 1995 and for many years AltaVista was the major player in this field.

   In 1997 Sergey Brin and Larry Page created Google as a research project at Stanford University. Google is now the most popular search engine in the world.

   Currently, there are three leading international search engines – Google, Yahoo and MSN Search. They each have their own databases and search algorithms. Many other search engines use results originating from these three major search engines and the same seo expertise can be applied to all of them. For example, the AOL search engine (search.aol.com) uses the Google database while AltaVista, Lycos and AllTheWeb all use the Yahoo database.

1.2 Common search engine principles
   To understand seo you need to be aware of the architecture of search engines. They all contain the following main components:

   Spider - a browser-like program that downloads web pages.

   Crawler – a program that automatically follows all of the links on each web page.

   Indexer - a program that analyzes web pages downloaded by the spider and the crawler.

   Database – storage for downloaded and processed pages.

   Results engine – extracts search results from the database.

   Web server – a server that is responsible for interaction between the user and other search engine components.

   Specific implementations of search mechanisms may differ. For example, the Spider+Crawler+Indexer component group might be implemented as a single program that downloads web pages, analyzes them and then uses their links to find new resources. However, the components listed are inherent to all search engines and the seo principles are the same.

   Spider. This program downloads web pages just like a web browser. The difference is that a browser displays the information presented on each page (text, graphics, etc.) while a spider does not have any visual components and works directly with the underlying HTML code of the page. You may already know that there is an option in standard web browsers to view source HTML code.

   Crawler. This program finds all links on each page. Its task is to determine where the spider should go either by evaluating the links or according to a predefined list of addresses. The crawler follows these links and tries to find documents not already known to the search engine.

   Indexer. This component parses each page and analyzes the various elements, such as text, headers, structural or stylistic features, special HTML tags, etc.

   Database. This is the storage area for the data that the search engine downloads and analyzes. Sometimes it is called the index of the search engine.

   Results Engine. The results engine ranks pages. It determines which pages best match a user's query and in what order the pages should be listed. This is done according to the ranking algorithms of the search engine. It follows that page rank is a valuable and interesting property and any seo specialist is most interested in it when trying to improve his site search results. In this article, we will discuss the seo factors that influence page rank in some detail.

   Web server. The search engine web server usually contains a HTML page with an input field where the user can specify the search query he or she is interested in. The web server is also responsible for displaying search results to the user in the form of an HTML page.
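   To make this architecture concrete, here is a minimal sketch in Python of a combined Spider+Crawler+Indexer of the kind described above. It is only an illustration built on this description, not the code of any real search engine; it uses the standard library (urllib and html.parser) and keeps its “database” as a simple in-memory inverted index.

```python
# Minimal spider + crawler + indexer sketch (illustrative only).
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen


class PageParser(HTMLParser):
    """Extracts outbound links and visible text from one HTML page."""
    def __init__(self):
        super().__init__()
        self.links, self.words = [], []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

    def handle_data(self, data):
        self.words.extend(data.lower().split())


def crawl(start_url, max_pages=10):
    """Downloads pages (spider), follows links (crawler) and builds
    a tiny inverted index word -> set of URLs (indexer + database)."""
    index, queue, seen = {}, deque([start_url]), set()
    while queue and len(seen) < max_pages:
        url = queue.popleft()
        if url in seen:
            continue
        seen.add(url)
        try:
            html = urlopen(url, timeout=10).read().decode("utf-8", "ignore")
        except Exception:
            continue                      # unreachable page: skip it
        parser = PageParser()
        parser.feed(html)
        for word in parser.words:
            index.setdefault(word, set()).add(url)
        for link in parser.links:
            queue.append(urljoin(url, link))
    return index
```

   Calling crawl("http://www.example.com/") would download up to ten pages and return a dictionary mapping each word to the set of pages that contain it, which is the role the results engine would then query.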

2. Internal ranking factors


   Several factors influence the position of a site in the search results. They can be divided into external and internal ranking factors. Internal ranking factors are those that are controlled by seo aware website owners (text, layout, etc.) and will be described next.

2.1 Web page layout factors relevant to seo
2.1.1 Amount of text on a page
   A page consisting of just a few sentences is less likely to get to the top of a search engine list. Search engines favor sites that have a high information content. Generally, you should try to increase the text content of your site in the interest of seo. The optimum page size is 500-3000 words (or 2000 to 20,000 characters).

   Search engine visibility increases as the amount of page text increases, because more text raises the likelihood that the page will match occasional and accidental search queries. This factor sometimes results in a large number of visitors.

2.1.2 Number of keywords on a page
   Keywords must be used at least three to four times in the page text. The upper limit depends on the overall page size – the larger the page, the more keyword repetitions can be made. Keyword phrases (word combinations consisting of several keywords) are worth a separate mention. The best seo results are observed when a keyword phrase is used several times in the text with all keywords in the phrase arranged in exactly the same order. In addition, all of the words from the phrase should be used separately several times in the remaining text. There should also be some difference (dispersion) in the number of entries for each of these repeated words.

   Let us take an example. Suppose we optimize a page for the phrase “seo software” (one of our seo keywords for this site). It would be good to use the phrase “seo software” in the text 10 times, the word “seo” 7 times elsewhere in the text and the word “software” 5 times. The numbers here are for illustration only, but they show the general seo idea quite well.

2.1.3 Keyword density and seo
   Keyword page density is a measure of the relative frequency of the word in the text expressed as a percentage. For example, if a specific word is used 5 times on a page containing 100 words, the keyword density is 5%. If the density of a keyword is too low, the search engine will not pay much attention to it. If the density is too high, the search engine may activate its spam filter. If this happens, the page will be penalized and its position in search listings will be deliberately lowered.

   The optimum value for keyword density is 5-7%. In the case of keyword phrases, you should calculate the total density of each of the individual keywords comprising the phrases to make sure it is within the specified limits. In practice, a keyword density of more than 7-8% does not seem to have any negative seo consequences. However, it is not necessary and can reduce the legibility of the content from a user’s viewpoint.
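   As an illustration of this calculation, the short Python sketch below counts how often a single keyword occurs and reports the density as a percentage. It is a simplified model of what this section describes, and the example numbers simply mirror the 5-occurrences-in-100-words case above.

```python
import re

def keyword_density(text, keyword):
    """Return the density of `keyword` in `text` as a percentage of all words."""
    words = re.findall(r"[a-z0-9']+", text.lower())
    if not words:
        return 0.0
    hits = sum(1 for w in words if w == keyword.lower())
    return 100.0 * hits / len(words)

# Example: 5 occurrences on a 100-word page gives a density of 5%.
page = ("software " * 5 + "filler " * 95).strip()
print(round(keyword_density(page, "software"), 1))   # -> 5.0
```

   For a keyword phrase you would run the same count for each individual word of the phrase and check that each stays within the limits given above.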

2.1.4 Location of keywords on a page
   A very short rule for seo experts – the closer a keyword or keyword phrase is to the beginning of a document, the more significant it becomes for the search engine.

2.1.5 Text format and seo
   Search engines pay special attention to page text that is highlighted or given special formatting. We recommend:

   - Use keywords in headings. Headings are text highlighted with the «H» HTML tags. The «h1» and «h2» tags are most effective. Currently, the use of CSS allows you to redefine the appearance of text highlighted with these tags. This means that «H» tags are used less often nowadays, but they are still very important in seo work;
   - Highlight keywords with bold fonts. Do not highlight the entire text! Just highlight each keyword two or three times on the page. Use the «strong» tag for highlighting instead of the more traditional «B» bold tag.

2.1.6 «TITLE» tag
   This is one of the most important tags for search engines. Make use of this fact in your seo work. Keywords must be used in the TITLE tag. The link to your site that is normally displayed in search results will contain text derived from the TITLE tag. It functions as a sort of virtual business card for your pages. Often, the TITLE tag text is the first information about your website that the user sees. This is why it should not only contain keywords, but also be informative and attractive. You want the searcher to be tempted to click on your listed link and navigate to your website. As a rule, 50-80 characters from the TITLE tag are displayed in search results and so you should limit the size of the title to this length.
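   Since roughly 50-80 characters of the TITLE tag are displayed, a simple automated check can flag titles outside that range. The Python sketch below is a hedged example using only the standard library html.parser; the limits are just the figures quoted in this section and the sample page is made up.

```python
from html.parser import HTMLParser

class TitleExtractor(HTMLParser):
    """Collects the text inside the <title> tag of a page."""
    def __init__(self):
        super().__init__()
        self._in_title, self.title = False, ""

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self._in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data

def check_title(html, low=50, high=80):
    """Return the page title and whether its length falls in [low, high]."""
    parser = TitleExtractor()
    parser.feed(html)
    title = parser.title.strip()
    return title, low <= len(title) <= high

sample = ("<html><head><title>Seo software and website promotion tips "
          "for small business owners</title></head><body>...</body></html>")
title, ok = check_title(sample)
print(len(title), ok)   # -> 65 True
```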

2.1.7 Keywords in links
   A simple seo rule – use keywords in the text of page links that refer to other pages on your site and to any external Internet resources. Keywords in such links can slightly enhance page rank.

2.1.8 «ALT» attributes in images
   Any page image has a special optional attribute known as “alternative text”. It is specified using the HTML «ALT» attribute. This text will be displayed if the browser fails to download the image or if the browser image display is disabled. Search engines save the value of image ALT attributes when they parse (index) pages, but do not use it to rank search results.

   Currently, the Google search engine takes into account text in the ALT attributes of those images that are links to other pages. The ALT attributes of other images are ignored. There is no information regarding other search engines, but we can assume that the situation is similar. We consider that keywords can and should be used in ALT attributes, but this practice is not vital for seo purposes.

2.1.9 Description Meta tag
   This is used to specify page descriptions. It does not influence the seo ranking process but it is very important. A lot of search engines (including the largest one – Google) display information from this tag in their search results if this tag is present on a page and if its content matches the content of the page and the search query.

   Experience has shown that a high position in search results does not always guarantee large numbers of visitors. For example, if your competitors' search result description is more attractive than the one for your site then search engine users may choose their resource instead of yours. That is why it is important that your Description Meta tag text be brief, but informative and attractive. It must also contain keywords appropriate to the page.

2.1.10 Keywords Meta tag
   This Meta tag was initially used to specify keywords for pages but it is hardly ever used by search engines now. It is often ignored in seo projects. However, it would be advisable to specify this tag just in case there is a revival in its use. The following rule must be observed for this tag: only keywords actually used in the page text must be added to it.

2.2 Site structure
2.2.1 Number of pages
   The general seo rule is: the more, the better. Increasing the number of pages on your website increases the visibility of the site to search engines. Also, if new information is being constantly added to the site, search engines consider this as development and expansion of the site. This may give additional advantages in ranking. You should periodically publish more information on your site – news, press releases, articles, useful tips, etc.

2.2.2 Navigation menu
   As a rule, any site has a navigation menu. Use keywords in menu links; this will give additional seo significance to the pages to which the links refer.

2.2.3 Keywords in page names
   Some seo experts consider that using keywords in the name of a HTML page file may have a positive effect on its search result position.

2.2.4 Avoid subdirectories
   If there are not too many pages on your site (up to a couple of dozen), it is best to place them all in the root directory of your site. Search engines consider such pages to be more important than ones in subdirectories.

2.2.5 One page – one keyword phrase
   For maximum seo try to optimize each page for its own keyword phrase. Sometimes you can choose two or three related phrases, but you should certainly not try to optimize a page for 5-10 phrases at once. Such phrases would probably produce no effect on page rank.

2.2.6 Seo and the Main page
   Optimize the main page of your site (domain name, index.html) for the word combinations that are most important. This page is the most likely to reach the top of search engine lists. My seo observations suggest that the main page may account for up to 30-40% of the total search traffic for some sites.

2.3 Common seo mistakes
2.3.1 Graphic header
   Very often sites are designed with a graphic header. Often, we see an image of the company logo occupying the full page width. Do not do it! The upper part of a page is a very valuable place where you should insert your most important keywords for best seo. In the case of a graphic image, that prime position is wasted since search engines cannot make use of images. Sometimes you may come across completely absurd situations: the header contains text information, but to make its appearance more attractive, it is created in the form of an image. The text in it cannot be indexed by search engines and so it will not contribute toward the page rank. If you must present a logo, the best way is to use a hybrid approach – place the graphic logo at the top of each page and size it so that it does not occupy its entire width. Use a text header to make up the rest of the width.

2.3.2 Graphic navigation menu
   The situation is similar to the previous one – internal links on your site should contain keywords, which will give an additional advantage in seo ranking. If your navigation menu consists of graphic elements to make it more attractive, search engines will not be able to index the text of its links. If it is not possible to avoid using a graphic menu, at least remember to specify correct ALT attributes for all images.

2.3.3 Script navigation
   Sometimes scripts are used for site navigation. As an seo worker, you should understand that search engines cannot read or execute scripts. Thus, a link specified with the help of a script will not be available to the search engine, the search robot will not follow it and so parts of your site will not be indexed. If you use site navigation scripts then you must provide regular HTML duplicates to make them visible to everyone – your human visitors and the search robots.

2.3.4 Session identifier
   Some sites use session identifiers. This means that each visitor gets a unique parameter (&session_id=) when he or she arrives at the site. This ID is added to the address of each page visited on the site. Session IDs help site owners to collect useful statistics, including information about visitors' behavior. However, from the point of view of a search robot, a page with a new address is a brand new page. This means that, each time the search robot comes to such a site, it will get a new session identifier and will consider the pages as new ones whenever it visits them.

   Search engines do have algorithms for consolidating mirrors and pages with the same content. Sites with session IDs should, therefore, be recognized and indexed correctly. However, it is difficult to index such sites and sometimes they may be indexed incorrectly, which has an adverse effect on seo page ranking. If you are interested in seo for your site, I recommend that you avoid session identifiers if possible.
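   If your platform allows it, one practical workaround is to strip the session parameter from every URL you expose in links and sitemaps, so that robots always see the same address. The Python sketch below is only an illustration; the parameter name session_id and the URL are the examples used in this section, not anything your own site necessarily uses.

```python
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

def strip_session_id(url, param="session_id"):
    """Remove the session parameter so every visit sees the same address."""
    parts = urlparse(url)
    query = [(k, v) for k, v in parse_qsl(parts.query) if k != param]
    return urlunparse(parts._replace(query=urlencode(query)))

print(strip_session_id("http://www.site.com/page.html?cat=2&session_id=ab12cd"))
# -> http://www.site.com/page.html?cat=2
```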

2.3.5 Redirects
   Redirects make site analysis more difficult for search robots, with resulting adverse effects on seo. Do not use redirects unless there is a clear reason for doing so.

2.3.6 Hidden text, a deceptive seo method
   The last two issues are not really mistakes but deliberate attempts to deceive search engines using illicit seo methods. Hidden text (when the text color coincides with the background color, for example) allows site owners to cram a page with their desired keywords without affecting page logic or visual layout. Such text is invisible to human visitors but will be seen by search robots. The use of such deceptive optimization methods may result in banning of the site. It could be excluded from the index (database) of the search engine.

2.3.7 One-pixel links, seo deception
   This is another deceptive seo technique. Search engines consider the use of tiny, almost invisible, graphic image links just one pixel wide and high as an attempt at deception, which may lead to a site ban.

3 External ranking factors


3.1 Why inbound links to sites are taken into account
   As you can see from the previous section, many factors influencing the ranking process are under the control of webmasters. If these were the only factors then it would be impossible for search engines to distinguish between a genuine high-quality document and a page created specifically to achieve high search ranking but containing no useful information. For this reason, an analysis of inbound links to the page being evaluated is one of the key factors in page ranking. This is the only factor that is not controlled by the site owner.

   It makes sense to assume that interesting sites will have more inbound links. This is because owners of other sites on the Internet will tend to have published links to a site if they think it is a worthwhile resource. The search engine will use this inbound link criterion in its evaluation of document significance.

   Therefore, two main factors influence how pages are stored by the search engine and sorted for display in search results:

    - Relevance, as described in the previous section on internal ranking factors.

    - Number and quality of inbound links, also known as link citation, link popularity or citation index. This will be described in the next section.

3.2 Link importance (citation index, link popularity)
   You can easily see that simply counting the number of inbound links does not give us enough information to evaluate a site. It is obvious that a link from www.microsoft.com should mean much more than a link from some homepage like www.hostingcompany.com/~myhomepage.html. You have to take into account link importance as well as number of links.

   Search engines use the notion of citation index to evaluate the number and quality of inbound links to a site. Citation index is a numeric estimate of the popularity of a resource expressed as an absolute value representing page importance. Each search engine uses its own algorithms to estimate a page citation index. As a rule, these values are not published.

   As well as the absolute citation index value, a scaled citation index is sometimes used. This relative value indicates the popularity of a page relative to the popularity of other pages on the Internet. You will find a detailed description of citation indexes and the algorithms used for their estimation in the next sections.

3.3 Link text (anchor text)
   The link text of any inbound site link is vitally important in search result ranking. The anchor (or link) text is the text between the HTML tags «A» and «/A» and is displayed as the text that you click in a browser to go to a new page. If the link text contains appropriate keywords, the search engine regards it as an additional and highly significant recommendation that the site actually contains valuable information relevant to the search query.

3.4 Relevance of referring pages
   As well as link text, search engines also take into account the overall information content of each referring page.

   Example: Suppose we are using seo to promote a car sales resource. In this case, a link from a site about car repairs will have much more importance than a similar link from a site about gardening. The first link is published on a resource with a similar topic, so it will be more important for search engines.

3.5 Google PageRank – theoretical basics
   The Google company was the first company to patent the system of taking into account inbound links. The algorithm was named PageRank. In this section, we will describe this algorithm and how it can influence search result ranking.

   PageRank is estimated separately for each web page and is determined by the PageRank (citation) of other pages referring to it. It is a kind of “virtuous circle.” The main task is to find the criterion that determines page importance. In the case of PageRank, it is the possible frequency of visits to a page.

   I shall now describe how a user’s behavior when following links to surf the network is modeled. It is assumed that the user starts viewing sites from some random page. Then he or she follows links to other web resources. There is always a possibility that the user may leave a site without following any outbound link and start viewing documents from a random page. The PageRank algorithm estimates the probability of this event as 0.15 at each step. The probability that our user continues surfing by following one of the links available on the current page is therefore 0.85, assuming that all links are equal in this case. If he or she continues surfing indefinitely, popular pages will be visited many more times than the less popular pages.

   The PageRank of a specified web page is thus defined as the probability that a user may visit the web page. It follows that the sum of probabilities for all existing web pages is exactly one because the user is assumed to be visiting at least one Internet page at any given moment.

   Since it is not always convenient to work with these probabilities the PageRank can be mathematically transformed into a more easily understood number for viewing. For instance, we are used to seeing a PageRank number between zero and ten on the Google Toolbar.

   According to the ranking model described above:
   - Each page on the Net (even if there are no inbound links to it) initially has a PageRank greater than zero, although it will be very small. There is a tiny chance that a user may accidentally navigate to it.
   - Each page that has outbound links distributes part of its PageRank to the referenced page. The PageRank contributed to these linked-to pages is inversely proportional to the total number of links on the linked-from page – the more links it has, the lower the PageRank allocated to each linked-to page.
   - A “damping factor” is applied to this process so that the total distributed PageRank is reduced by 15%. This is equivalent to the probability, described above, that the user will not visit any of the linked-to pages but will navigate to an unrelated website. (A compact formula for this model is given just below.)
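   For reference, the model just described is usually summarized by the classic formula from the original PageRank paper, using the 0.85/0.15 damping split mentioned above (the exact normalization varies between published descriptions):

$$ PR(A) = (1 - d) + d\left(\frac{PR(T_1)}{C(T_1)} + \cdots + \frac{PR(T_n)}{C(T_n)}\right), \qquad d = 0.85 $$

   where T1 … Tn are the pages linking to page A and C(Ti) is the number of outbound links on page Ti.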

   Let us now see how this PageRank process might influence the process of ranking search results. We say “might” because the pure PageRank algorithm just described has not been used in the Google algorithm for quite a while now. We will discuss a more current and sophisticated version shortly. There is nothing difficult about the PageRank influence – after the search engine finds a number of relevant documents (using internal text criteria), they can be sorted according to the PageRank since it would be logical to suppose that a document having a larger number of high-quality inbound links contains the most valuable information.

   Thus, the PageRank algorithm "pushes up" those documents that are most popular outside the search engine as well.

3.6 Google PageRank – practical use
   Currently, PageRank is not used directly in the Google algorithm. This is to be expected since pure PageRank characterizes only the number and the quality of inbound links to a site, but it completely ignores the text of links and the information content of referring pages. These factors are important in page ranking and they are taken into account in later versions of the algorithm. It is thought that the current Google ranking algorithm ranks pages according to thematic PageRank. In other words, it emphasizes the importance of links from pages with content related by similar topics or themes. The exact details of this algorithm are known only to Google developers.

   You can determine the PageRank value for any web page with the help of the Google ToolBar that shows a PageRank value within the range from 0 to 10. It should be noted that the Google ToolBar does not show the exact PageRank probability value, but the PageRank range a particular site is in. Each range (from 0 to 10) is defined according to a logarithmic scale.

   Here is an example: each page has a real PageRank value known only to Google. To derive a displayed PageRank range for their ToolBar, they use a logarithmic scale as shown in this table
          Real PR                  ToolBar PR
          1-10                     1
          10-100                   2
          100-1,000                3
          1,000-10,000             4
          etc.


   This shows that the PageRank ranges displayed on the Google ToolBar are not all equal. It is easy, for example, to increase PageRank from one to two, while it is much more difficult to increase it from six to seven.
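   Under the logarithmic-scale assumption in the table above, the displayed value can be approximated with a one-line calculation. The Python sketch below only illustrates the scale; it is not Google’s actual mapping, and the “real” PageRank values are invented.

```python
import math

def toolbar_pr(real_pr):
    """Approximate displayed ToolBar PR for a hypothetical 'real' PageRank,
    assuming the base-10 logarithmic ranges shown in the table above."""
    return max(1, int(math.log10(real_pr)) + 1) if real_pr >= 1 else 0

for real in (5, 50, 500, 5000):
    print(real, "->", toolbar_pr(real))   # -> 1, 2, 3, 4
```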

   In practice, PageRank is mainly used for two purposes:

   1. Quick check of the site’s popularity. PageRank does not give exact information about referring pages, but it allows you to quickly and easily get a feel for the site’s popularity level and to follow trends that may result from your seo work. You can use the following “rule of thumb” measures for English language sites: PR 4-5 is typical for most sites with average popularity. PR 6 indicates a very popular site, while PR 7 is almost unreachable for a regular webmaster. You should congratulate yourself if you manage to achieve it. PR 8, 9 and 10 can only be achieved by the sites of large companies such as Microsoft, Google, etc. PageRank is also useful when exchanging links and in similar situations. You can compare the quality of the pages offered in the exchange with pages from your own site to decide if the exchange should be accepted.

   2. Evaluation of the competitiveness level for a search query is a vital part of seo work. Although PageRank is not used directly in the ranking algorithms, it allows you to indirectly evaluate relative site competitiveness for a particular query. For example, if the search engine displays sites with PageRank 6-7 in the top search results, a site with PageRank 4 is not likely to get to the top of the results list using the same search query.

   It is important to recognize that the PageRank values displayed on the Google ToolBar are recalculated only occasionally (every few months) so the Google ToolBar displays somewhat outdated information. This means that the Google search engine tracks changes in inbound links much faster than these changes are reflected on the Google ToolBar.

3.7 Increasing link popularity
3.7.1 Submitting to general purpose directories
   On the Internet, many directories contain links to other network resources grouped by topics. The process of adding your site information to them is called submission.

   Such directories can be paid or free of charge, they may require a backlink from your site or they may have no such requirement. The number of visitors to these directories is not large so they will not send a significant number to your site. However, search engines count links from these directories and this may enhance your sites search result placement.

   Important! Only those directories that publish a direct link to your site are worthwhile from a seo point of view. Script driven directories are almost useless. This point deserves a more detailed explanation. There are two methods for publishing a link. A direct link is published as a standard HTML construction («A href=...», etc.). Alternatively, links can be published with the help of various scripts, redirects and so on. Search engines understand only those links that are specified directly in HTML code. That is why the seo value of a directory that does not publish a direct link to your site is close to zero.

   You should not submit your site to FFA (free-for-all) directories. Such directories automatically publish links related to any search topic and are ignored by search engines. The only thing an FFA directory entry will give you is an increase in spam sent to your published e-mail address. Actually, this is the main purpose of FFA directories.

   Be wary of promises from various programs and seo services that submit your resource to hundreds of thousands of search engines and directories. There are no more than a hundred or so genuinely useful directories on the Net – this is the number to take seriously and professional seo submission services work with this number of directories. If a seo service promises submissions to enormous numbers of resources, it simply means that the submission database mainly consists of FFA archives and other useless resources.

   Give preference to manual or semiautomatic seo submission; do not rely completely on automatic processes. Submitting sites under human control is generally much more efficient than fully automatic submission. The value of submitting a site to paid directories or publishing a backlink should be considered individually for each directory. In most cases, it does not make much sense, but there may be exceptions.

   Submitting sites to directories does not often result in a dramatic effect on site traffic, but it slightly increases the visibility of your site for search engines. This useful seo option is available to everyone and does not require a lot of time and expense, so do not overlook it when promoting your project.

3.7.2 DMOZ directory
    The DMOZ directory (www.dmoz.org) or the Open Directory Project is the largest directory on the Internet. There are many copies of the main DMOZ site and so, if you submit your site to the DMOZ directory, you will get a valuable link from the directory itself as well as dozens of additional links from related resources. This means that the DMOZ directory is of great value to a seo aware webmaster.

   It is not easy to get your site into the DMOZ directory; there is an element of luck involved. Your site may appear in the directory a few minutes after it has been submitted or it may take months to appear.

   If you submitted your site details correctly and in the appropriate category then it should eventually appear. If it does not appear after a reasonable time then you can try contacting the editor of your category with a question about your request (the DMOZ site gives you such opportunity). Of course, there are no guarantees, but it may help. DMOZ directory submissions are free of charge for all sites, including commercial ones.

   Here are my final recommendations regarding site submissions to DMOZ. Read all site requirements, descriptions, etc. to avoid violating the submission rules. Such a violation will most likely result in a refusal to consider your request. Please remember, presence in the DMOZ directory is desirable, but not obligatory. Do not despair if you fail to get into this directory. It is possible to reach top positions in search results without this directory – many sites do.

3.7.3 Link exchange
   The essence of link exchanges is that you use a special page to publish links to other sites and get similar backlinks from them. Search engines do not like link exchanges because, in many cases, they distort search results and do not provide anything useful to Internet users. However, it is still an effective way to increase link popularity if you observe several simple rules.

   - Exchange links with sites that are related by topic. Exchanging links with unrelated sites is ineffective and unpopular.

   - Before exchanging, make sure that your link will be published on a “good” page. This means that the page must have a reasonable PageRank (3-4 or higher is recommended), it must be available for indexing by search engines, the link must be direct, the total number of links on the page must not exceed 50, and so on.

   - Do not create large link directories on your site. The idea of such a directory seems attractive because it gives you an opportunity to exchange links with many sites on various topics. You will have a topic category for each listed site. However, when trying to optimize your site you are looking for link quality rather than quantity and there are some potential pitfalls. No seo aware webmaster will publish a quality link to you if he receives a worthless link from your directory “link farm” in return. Generally, the PageRank of pages from such directories leaves a lot to be desired. In addition, search engines do not like these directories at all. There have even been cases where sites were banned for using such directories.

   - Use a separate page on the site for link exchanges. It must have a reasonable PageRank and it must be indexed by search engines, etc. Do not publish more than 50 links on one page (otherwise search engines may fail to take some of the links into account). This will help you to find other seo aware partners for link exchanges.

   - Search engines try to track mutual links. That is why you should, if possible, publish backlinks on a domain/site other than the one you are trying to promote. The best variant is when you promote the resource site1.com and publish backlinks on the resource site2.com.

    - Exchange links with caution. Webmasters who are not quite honest will often remove your links from their resources after a while. Check your backlinks from time to time.

3.7.4 Press releases, news feeds, thematic resources
   This section is about site marketing rather than pure seo. There are many information resources and news feeds that publish press releases and news on various topics. Such sites can supply you with direct visitors and also increase your sites popularity. If you do not find it easy to create a press release or a piece of news, hire copywriters – they will help you find or create something newsworthy.

   Look for resources that deal with topics similar to your own site. You may find many Internet projects that are not in direct competition with you, but which share the same topic as your site. Try to approach the site owners. It is quite possible that they will be glad to publish information about your project.

   One final tip for obtaining inbound links – try to create slight variations in the inbound link text. If all inbound links to your site have exactly the same link text and there are many of them, the search engines may flag it as a spam attempt and penalize your site.

4 Indexing a site


   Before a site appears in search results, a search engine must index it. An indexed site will have been visited and analyzed by a search robot, with the relevant information saved in the search engine database. If a page is present in the search engine index, it can be displayed in search results; otherwise, the search engine knows nothing about it and cannot display information from the page.

   Most average sized sites (with dozens to hundreds of pages) are usually indexed correctly by search engines. However, you should remember the following points when constructing your site. There are two ways to allow a search engine to learn about a new site:

   - Submit the address of the site manually using a form associated with the search engine, if available. In this case, you are the one who informs the search engine about the new site and its address goes into the queue for indexing. Only the main page of the site needs to be added, the search robot will find the rest of pages by following links.

   - Let the search robot find the site on its own. If there is at least one inbound link to your resource from other indexed resources, the search robot will soon visit and index your site. In most cases, this method is recommended. Get some inbound links to your site and just wait until the robot visits it. This may actually be quicker than manually adding it to the submission queue. Indexing a site typically takes from a few days to two weeks depending on the search engine. The Google search engine is the quickest of the bunch.

   Try to make your site friendly to search robots by following these rules:

   - Try to make any page of your site reachable from the main page in not more than three mouse clicks. If the structure of the site does not allow you to do this, create a so-called site map that will allow this rule to be observed.

   - Do not make common mistakes. Session identifiers make indexing more difficult. If you use script navigation, make sure you duplicate these links with regular ones because search engines cannot read scripts (see more details about these and other mistakes in section 2.3).

   - Remember that search engines index no more than the first 100-200 KB of text on a page. Hence, the following rule – do not use pages with text larger than 100 KB if you want them to be indexed completely.

   You can manage the behavior of search robots using the file robots.txt. This file allows you to explicitly permit or forbid them to index particular pages on your site.
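   As a hedged example, the Python standard library ships a parser for this file (urllib.robotparser) that you can use to verify what your robots.txt actually allows; the domain, path and user-agent below are placeholders.

```python
from urllib.robotparser import RobotFileParser

# A robots.txt that forbids /private/ for all robots would look like:
#   User-agent: *
#   Disallow: /private/

rp = RobotFileParser()
rp.set_url("http://www.site.com/robots.txt")   # placeholder domain
rp.read()                                      # download and parse the file

# Ask whether a particular robot may fetch a particular page.
print(rp.can_fetch("Googlebot", "http://www.site.com/private/page.html"))
```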

   The databases of search engines are constantly being updated; records in them may change, disappear and reappear. That is why the number of indexed pages on your site may sometimes vary. One of the most common reasons for a page to disappear from indexes is server unavailability. This means that the search robot could not access it at the time it was attempting to index the site. After the server is restarted, the site should eventually reappear in the index.

   You should note that the more inbound links your site has, the more quickly it gets re-indexed. You can track the process of indexing your site by analyzing server log files where all visits of search robots are logged. We will give details of seo software that allows you to track such visits in a later section.
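   A minimal sketch of such tracking is shown below in Python. It assumes a log in the common “combined” format where the user-agent is the last quoted field, and the robot names and log file name are only examples; adjust them to your own server.

```python
import re
from collections import Counter

ROBOTS = ("Googlebot", "Slurp", "msnbot")      # example robot names

def robot_visits(log_path="access.log"):
    """Count visits per robot by inspecting the quoted user-agent field."""
    counts = Counter()
    with open(log_path, encoding="utf-8", errors="ignore") as log:
        for line in log:
            agent = re.findall(r'"([^"]*)"', line)[-1:] or [""]
            for robot in ROBOTS:
                if robot.lower() in agent[0].lower():
                    counts[robot] += 1
    return counts

print(robot_visits())
```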

5 Choosing keywords


5.1 Initially choosing keywords
   Choosing keywords should be your first step when constructing a site. You should have the keyword list available to incorporate into your site text before you start composing it. To define your site keywords, you should use seo services offered by search engines in the first instance. Sites such as www.wordtracker.com and inventory.overture.com are good starting places for English language sites. Note that the data they provide may sometimes differ significantly from what keywords are actually the best for your site. You should also note that the Google search engine does not give information about frequency of search queries.

   After you have defined your approximate list of initial keywords, you can analyze your competitor’s sites and try to find out what keywords they are using. You may discover some further relevant keywords that are suitable for your own site.

5.2 Frequent and rare keywords
   There are two distinct strategies – optimize for a small number of highly popular keywords or optimize for a large number of less popular words. In practice, both strategies are often combined.

   The disadvantage of keywords that attract frequent queries is that the competition rate is high for them. It is often not possible for a new site to get anywhere near the top of search result listings for these queries.

   For keywords associated with rare queries, it is often sufficient just to mention the necessary word combination on a web page or to perform minimum text optimization. Under certain circumstances, rare queries can supply quite a large amount of search traffic.

   The aim of most commercial sites is to sell some product or service or to make money in some way from their visitors. This should be kept in mind during your seo (search engine optimization) work and keyword selection. If you are optimizing a commercial site then you should try to attract targeted visitors (those who are ready to pay for the offered product or service) to your site rather than concentrating on sheer numbers of visitors.

   Example. The query “monitor” is much more popular and competitive than the query “monitor Samsung 710N” (the exact name of the model). However, the second query is much more valuable for a seller of monitors. It is also easier to get traffic from it because its competition rate is low; there are not many other sites owned by sellers of Samsung 710N monitors. This example highlights another possible difference between frequent and rare search queries that should be taken into account – rare search queries may provide you with fewer visitors overall, but more targeted visitors.

5.3 Evaluating the competition rates of search queries
   When you have finalized your keywords list, you should identify the core keywords for which you will optimize your pages. A suggested technique for this follows.

   Rare queries are discarded at once (for the time being). In the previous section, we described the usefulness of such rare queries but they do not require special optimization. They are likely to occur naturally in your website text.

   As a rule, the competition rate is very high for the most popular phrases. This is why you need to get a realistic idea of the competitiveness of your site. To evaluate the competition rate you should estimate a number of parameters for the first 10 sites displayed in search results:
   - The average PageRank of the pages in the search results.
   - The average number of links to these sites. Check this using a variety of search engines.
   Additional parameters:
   - The number of pages on the Internet that contain the particular search term, the total number of search results for that search term.
   - The number of pages on the Internet that contain exact matches to the keyword phrase. The search for the phrase is bracketed by quotation marks to obtain this number.

   These additional parameters allow you to indirectly evaluate how difficult it will be to get your site near the top of the list for this particular phrase. As well as the parameters described, you can also check the number of sites present in your search results in the main directories, such as DMOZ and Yahoo.

   The analysis of the parameters mentioned above and their comparison with those of your own site will allow you to predict with reasonable certainty the chances of getting your site to the top of the list for a particular phrase.

   Having evaluated the competition rate for all of your keyword phrases, you can now select a number of moderately popular key phrases with an acceptable competition rate, which you can use to promote and optimize your site.

5.4 Refining your keyword phrases
   As mentioned above, search engine services often give inaccurate keyword information. This means that it is unusual to obtain an optimum set of site keywords at your first attempt. After your site is up and running and you have carried out some initial promotion, you can obtain additional keyword statistics, which will facilitate some fine-tuning. For example, you will be able to obtain the search results rating of your site for particular phrases and you will also have the number of visits to your site for these phrases.

   With this information, you can clearly define the good and bad keyword phrases. Often there is no need to wait until your site gets near the top of all search engines for the phrases you are evaluating – one or two search engines are enough.

   Example. Suppose your site occupies first place in the Yahoo search engine for a particular phrase. At the same time, this site is not yet listed in MSN or Google search results for this phrase. However, if you know the percentage of visits to your site from various search engines (for instance, Google – 70%, Yahoo – 20%, MSN search – 10%), you can predict the approximate amount of traffic for this phrase from these other search engines and decide whether it is suitable.
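   A back-of-the-envelope version of this estimate is shown below in Python; the visit counts and traffic shares are invented illustration values based on the example above, not real data.

```python
# Suppose the phrase currently brings 40 visits a month from Yahoo,
# and Yahoo normally accounts for 20% of the site's search traffic.
yahoo_visits = 40
shares = {"Google": 0.70, "Yahoo": 0.20, "MSN": 0.10}

total_estimate = yahoo_visits / shares["Yahoo"]           # ~200 visits overall
for engine, share in shares.items():
    print(engine, round(total_estimate * share))          # Google 140, Yahoo 40, MSN 20
```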

   As well as detecting bad phrases, you may find some new good ones. For example, you may see that a keyword phrase you did not optimize your site for brings useful traffic despite the fact that your site is on the second or third page in search results for this phrase.

   Using these methods, you will arrive at a new refined set of keyword phrases. You should now start reconstructing your site: Change the text to include more of the good phrases, create new pages for new phrases, etc.

   You can repeat this seo exercise several times and, after a while, you will have an optimum set of key phrases for your site and considerably increased search traffic.
   Here are some more tips. According to statistics, the main page takes up to 30%-50% of all search traffic. It has the highest visibility in search engines and it has the largest number of inbound links. That is why you should optimize the main page of your site to match the most popular and competitive queries. Each site page should be optimized for one or two main word combinations and, possibly, for a number of rare queries. This will increase the chances for the page to get to the top of search engine lists for particular phrases.

6 Miscellaneous information on search engines


6.1 Google SandBox
   At the beginning of 2004, a new and mysterious term appeared among seo specialists – Google SandBox. This is the name of a new Google spam filter that excludes new sites from search results. The work of the SandBox filter results in new sites being absent from search results for virtually any phrase. This even happens with sites that have high-quality unique content and which are promoted using legitimate techniques.

   The SandBox is currently applied only to the English segment of the Internet; sites in other languages are not yet affected by this filter. However, this filter may expand its influence. It is assumed that the aim of the SandBox filter is to exclude spam sites – indeed, no search spammer will be able to wait for months until he gets the necessary results. However, many perfectly valid new sites suffer the consequences. So far, there is no precise information as to what the SandBox filter actually is. Here are some assumptions based on practical seo experience:

   - SandBox is a filter that is applied to new sites. A new site is put in the sandbox and is kept there for some time until the search engine starts treating it as a normal site.

   - SandBox is a filter applied to new inbound links to new sites. There is a fundamental difference between this and the previous assumption: the filter is not based on the age of the site, but on the age of inbound links to the site. In other words, Google treats the site normally but it refuses to acknowledge any inbound links to it unless they have existed for several months. Since such inbound links are one of the main ranking factors, ignoring inbound links is equivalent to the site being absent from search results. It is difficult to say which of these assumptions is true, it is quite possible that they are both true.

   - The site may be kept in the sandbox from 3 months to a year or more. It has also been noticed that sites are released from the sandbox in batches. This means that the time sites are kept in the sandbox is not calculated individually for each site, but for groups of sites. All sites created within a certain time period are put into the same group and they are eventually all released at the same time. Thus, individual sites in a group can spend different times in the sandbox depending where they were in the group capture-release cycle.

   Typical indications that your site is in the sandbox include:

   - Your site is normally indexed by Google and the search robot regularly visits it.
   - Your site has a PageRank; the search engine knows about and correctly displays inbound links to your site.
   - A search by site address (www.site.com) displays correct results, with the correct title, snippet (resource description), etc.
   - Your site is found by rare and unique word combinations present in the text of its pages.
   - Your site is not displayed in the first thousand results for any other queries, even for those for which it was initially created. Sometimes, there are exceptions and the site appears among 500-600 positions for some queries. This does not change the sandbox situation, of course.

   There are no practical ways to bypass the SandBox filter. There have been some suggestions about how it may be done, but they are no more than suggestions and are of little use to a regular webmaster. The best course of action is to continue seo work on the site content and structure and wait patiently until the sandbox is lifted, after which you can expect a dramatic increase in ratings, up to 400-500 positions.

6.2 Google LocalRank
   On February 25, 2003, the Google Company patented a new algorithm for ranking pages called LocalRank. It is based on the idea that pages should be ranked not by their global link citations, but by how they are cited among pages that deal with topics related to the particular query. The LocalRank algorithm is not used in practice (at least, not in the form described in the patent). However, the patent contains several interesting innovations we think any seo specialist should know about. Nearly all search engines already take into account the topics of referring pages, though they seem to use algorithms rather different from LocalRank; studying the patent will still allow us to learn the general ideas behind how such ranking may be implemented.

   While reading this section, please bear in mind that it contains theoretical information rather than practical guidelines.

   The following three items comprise the main idea of the LocalRank algorithm:

   1. An algorithm is used to select a certain number of documents relevant to the search query (let it be N). These documents are initially sorted by some criteria (this may be PageRank, relevance or a group of other criteria). Let us call the numeric value of this criterion OldScore.

   2. Each of the N selected pages goes through a new ranking procedure and gets a new rank. Let us call it LocalScore.

   3. The OldScore and LocalScore values for each page are multiplied, to yield a new value – NewScore. The pages are finally ranked based on NewScore.

   The key procedure in this algorithm is the new ranking procedure, which gives each page a new LocalScore rank. Let us examine this new procedure in more detail:

   0. An initial ranking algorithm is used to select N pages relevant to the search query. Each of the N pages is allocated an OldScore value by this algorithm. The new ranking algorithm only needs to work on these N selected pages.

   1. While calculating LocalScore for each page, the system selects those pages from N that have inbound links to this page. Let us call this set of pages M. At the same time, any pages from the same host (as determined by IP address) and pages that are mirrors of the given page are excluded from M.

   2. The set M is divided into subsets Li. Pages are grouped into these subsets according to the following criteria:
   - Pages belonging to the same (or a related) host, that is, pages whose first three IP-address octets are the same. Pages with IP addresses in the range xxx.xxx.xxx.0 to xxx.xxx.xxx.255 are considered to belong to one group.
   - Pages that have the same or similar content (mirrors).
   - Pages on the same site (domain).

   3. Each page in each Li subset has an OldScore rank. The page with the largest OldScore is taken from each subset and the rest of the pages are excluded from the analysis. Thus, we get a subset K of pages referring to this page.

   4. Pages in the subset K are sorted by the OldScore parameter, then only the first k pages (k is some predefined number) are left in the subset K. The rest of the pages are excluded from the analysis.

   5. LocalScore is calculated in this step by combining the OldScore values of the remaining k pages, as shown in the following formula:
   LocalScore(i) = OldScore(1)^m + OldScore(2)^m + … + OldScore(k)^m
   Here m is some predefined parameter that may vary from one to three. Unfortunately, the patent for the algorithm in question does not describe this parameter in detail.

   After LocalScore is calculated for each page from the set N, NewScore values are calculated and pages are re-sorted according to the new criteria. The following formula is used to calculate NewScore:

   NewScore(i) = (a + LocalScore(i)/MaxLS) * (b + OldScore(i)/MaxOS)

   where:

   i – the page for which the new rank is calculated;

   a and b – numeric constants (there is no more detailed information in the patent about these parameters);

   MaxLS – the maximum LocalScore among those calculated;

   MaxOS – the maximum value among the OldScore values.

   Now let us put the math aside and explain these steps in plain words.

   In step 0) pages relevant to the query are selected. Algorithms that do not take into account the link text are used for this. For example, relevance and overall link popularity are used. We now have a set of OldScore values. OldScore is the rating of each page based on relevance, overall link popularity and other factors.

   In step 1) pages with inbound links to the page of interest are selected from the group obtained in step 0). The group is whittled down by removing mirror and other sites in steps 2), 3) and 4) so that we are left with a set of genuinely unique sites that all share a common theme with the page that is under analysis. By analyzing inbound links from pages in this group (ignoring all other pages on the Internet), we get the local (thematic) link popularity.

   LocalScore values are then calculated in step 5). LocalScore is the rating of a page among the set of pages that are related by topic. Finally, pages are rated and ranked using a combination of LocalScore and OldScore.
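
   To make the procedure more tangible, below is a minimal Python sketch of the LocalScore and NewScore calculations as we have described them. This is not Google's actual code: the Page structure and the default values of k, m, a and b are illustrative assumptions, and mirror detection is omitted.

      # A minimal sketch of the LocalScore/NewScore calculation, not Google's
      # actual code. The Page structure and the default values of k, m, a and b
      # are illustrative assumptions; mirror detection is omitted.
      from dataclasses import dataclass, field

      @dataclass
      class Page:
          url: str
          host_group: str                 # e.g. the first three octets of the IP address
          domain: str
          old_score: float                # OldScore assigned by the initial ranking
          links_to: set = field(default_factory=set)   # URLs this page links to

      def local_score(target, candidates, k=10, m=2.0):
          # Step 1: pages from the candidate set N that link to the target,
          # excluding pages from the target's own host and domain.
          linking = [p for p in candidates
                     if target.url in p.links_to
                     and p.host_group != target.host_group
                     and p.domain != target.domain]
          # Steps 2-3: group by host/domain and keep the best page of each group.
          best = {}
          for p in linking:
              key = (p.host_group, p.domain)
              if key not in best or p.old_score > best[key].old_score:
                  best[key] = p
          # Step 4: keep only the top k pages by OldScore.
          top_k = sorted(best.values(), key=lambda p: p.old_score, reverse=True)[:k]
          # Step 5: combine the OldScore values of the remaining pages.
          return sum(p.old_score ** m for p in top_k)

      def new_scores(candidates, a=0.5, b=0.5, k=10, m=2.0):
          ls = {p.url: local_score(p, candidates, k, m) for p in candidates}
          max_ls = max(ls.values()) or 1.0                     # avoid division by zero
          max_os = max(p.old_score for p in candidates) or 1.0
          return {p.url: (a + ls[p.url] / max_ls) * (b + p.old_score / max_os)
                  for p in candidates}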

6.3 Seo tips, assumptions, observations
   This section provides information based on an analysis of various seo articles, communication between optimization specialists, practical experience and so on. It is a collection of interesting and useful tips, ideas and suppositions. Do not regard this section as written in stone, but rather as a collection of information and suggestions for your consideration.

   - Outbound links. Publish links to authoritative resources in your subject field using the necessary keywords. Search engines place a high value on links to other resources based on the same topic.

   - Outbound links. Do not publish links to FFA sites and other sites excluded from the indexes of search engines. Doing so may lower the rating of your own site.

   - Outbound links. A page should not contain more than 50-100 outbound links. Extra links will not harm your site's rating, but links beyond that number will simply not be counted by search engines.

   - Inbound site-wide links. These are links published on every page of the site. It is believed that search engines do not approve of such links and do not consider them while ranking pages. Another opinion is that this is true only for large sites with thousands of pages.

   - The ideal keyword density is a frequent seo discussion topic. The real answer is that there is no ideal keyword density. It is different for each query and search engines calculate it dynamically for each search query. Our advice is to analyze the first few sites in search results for a particular query. This will allow you to evaluate the approximate optimum density for specific queries.

   - Site age. Search engines prefer old sites because they are more stable.

   - Site updates. Search engines prefer sites that are constantly developing. Developing sites are those in which new information and new pages periodically appear.

   - Domain zone. Search engines prefer sites that are located in the zones .edu, .mil, .gov, etc. Only the corresponding organizations can register such domains so these domains are more trustworthy.

   - Search engines track the percentage of visitors who immediately return to the search results after visiting a site via a result link. A large number of immediate returns suggests that the content is probably not relevant to the corresponding topic, and the ranking of such a page gets lower.

   - Search engines track how often a link is selected in search results. If a link is only occasionally selected, it means that the page is of little interest and the rating of such a page gets lower.

   - Use synonyms and derived word forms of your keywords (keyword stemming); search engines will appreciate it.

   - Search engines consider a very rapid increase in inbound links as artificial promotion, and this results in a lowering of the rating. This is a controversial topic because such a method could be used to lower the rating of one's competitors.

   - Google does not take into account inbound links if they originate from the same (or a similar) host. This is detected using host IP addresses: pages whose IP addresses are within the range xxx.xxx.xxx.0 to xxx.xxx.xxx.255 are regarded as being on the same host. This opinion is most likely rooted in the fact that Google has expressed the idea in its patents. However, Google employees claim that no IP-address limitations are imposed on inbound links, and there is no reason not to believe them.

   - Search engines check information about the owners of domains. Inbound links originating from a variety of sites all belonging to one owner are regarded as less important than normal links. This information is presented in a patent.

   - Search engines prefer sites with longer term domain registrations.

6.4 Creating correct content
   The content of a site plays an important role in site promotion for many reasons. We will describe some of them in this section. We will also give you some advice on how to populate your site with good content.

   - Content uniqueness. Search engines value new information that has not been published before. That is why you should write your own site text rather than copying material from other sites. A site based on materials taken from other sites is much less likely to get to the top in search engines. As a rule, the original source is ranked higher in search results.

   - While creating a site, remember that it is primarily created for human visitors, not search engines. Getting visitors to visit your site is only the first step and it is the easiest one. The truly difficult task is to make them stay on the site and convert them into purchasers. You can only do this by using good content that is interesting to real people.

   - Try to update information on the site and add new pages on a regular basis. Search engines value sites that are constantly developing. Also, the more useful text your site contains, the more visitors it attracts. Write articles on the topic of your site, publish visitors' opinions, create a forum for discussing your project. A forum is only useful if the number of visitors is sufficient for it to be active. Interesting and attractive content guarantees that the site will attract interested visitors.

   - A site created for people rather than search engines has a better chance of getting into important directories such as DMOZ and others.

   - An interesting site on a particular topic has much better chances to get links, comments, reviews, etc. from other sites on this topic. Such reviews can give you a good flow of visitors while inbound links from such resources will be highly valued by search engines.

   - As a final tip, there is an old German proverb, "A shoemaker sticks to his last", which means "do what you can do best." If you can write breathtaking and creative prose for your website, that is great. However, most of us have no special talent for writing attractive text and should rely on professionals such as journalists and technical writers. Of course, this is an extra expense, but it is justified in the long term.

6.5 Selecting a domain and hosting
   Currently, anyone can create a page on the Internet without incurring any expense. There are companies providing free hosting services that will publish your page in return for the right to display advertising on it, and many Internet service providers will also allow you to publish your page on their servers if you are their client. However, all these options have serious drawbacks that you should consider carefully if you are creating a commercial project.

   First, and most importantly, you should obtain your own domain for the following reasons:

   - A project that does not have its own domain is regarded as a transient project. Indeed, why should we trust a resource whose owners are not even prepared to invest the tiny sum required to create some sort of minimum corporate image? It is possible to publish free materials using free or ISP-based hosting, but any attempt to create a commercial project without your own domain is doomed to failure.

   - Your own domain allows you to choose your hosting provider. If necessary, you can move your site to another hosting provider at any time.

    Here are some useful tips for choosing a domain name.

   - Try to make it easy to remember and make sure there is only one way to pronounce and spell it.

   - Domains with the extension .com are the best choice to promote international projects in English. Domains from the zones .net, .org, .biz, etc., are available but less preferable.

   - If you want to promote a site with a national flavor, use a domain from the corresponding national zone. Use .de – for German sites, .it – for Italian sites, etc.

   - In the case of sites containing two or more languages, you should assign a separate domain to each language. National search engines are more likely to appreciate such an approach than subsections for various languages located on one site.

   A domain costs $10-20 a year, depending on the particular registration service and zone.

   You should take the following factors into consideration when choosing a hosting provider:

   - Access bandwidth.
   - Server uptime.
   - The cost of traffic per gigabyte and the amount of prepaid traffic.
   - The geographical location of the server; the site is best located in the same region as most of your expected visitors.

   The cost of hosting services for small projects is around $5-10 per month.

   Avoid “free” offers while choosing a domain and a hosting provider. Hosting providers sometimes offer free domains to their clients. Such domains are often registered not to you, but to the hosting company. The hosting provider will be the owner of the domain. This means that you will not be able to change the hosting service of your project, or you could even be forced to buy out your own domain at a premium price. Also, you should not register your domains via your hosting company. This may make moving your site to another hosting company more difficult even though you are the owner of your domain.

6.6 Changing the site address
   You may need to change the address of your project. Maybe the resource was started on a free hosting service and has developed into a more commercial project that should have its own domain, or maybe the owner has simply found a better name for the project. In any case, moving a project to a new address is a difficult and unpleasant task: you will have to promote the new address almost from scratch. However, if the move is inevitable, you may as well make the change as useful as possible.

   Our advice is to create your new site at the new location with new and unique content. Place highly visible links to the new resource on the old site to allow visitors to easily navigate to your new site. Do not completely delete the old site and its contents.

   This approach will allow you to get visitors from search engines to both the old site and the new one. At the same time, you get an opportunity to cover additional topics and keywords, which may be more difficult within one resource.

7 SEO software review


   In previous chapters, we explained how to create your own site and what methods are available to promote it. This last section is devoted to seo software tools that can automate much of the seo work on your site and help you achieve even better results. We will discuss the Seo Administrator seo software suite, which you can download from our site (www.seoadministrator.com).

7.1 Ranking Monitor
   Any seo specialist is regularly faced with the task of checking the positions of his sites in the search engines. You could check these positions manually, but if you have several dozen keywords and 5-7 search engines to monitor, the process becomes a real chore.

   The Ranking Monitor module does all of this automatically. You can see information on your site's ratings for any keyword in a variety of search engines, along with the dynamics and history of your site's positions and the upward or downward trends for your specified keywords. The same information is also displayed in a visual form.
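
   The heart of any position-checking tool is simple: given the list of result URLs returned for a query, find where your site first appears. The sketch below illustrates only that logic; how the ranked results are obtained (an engine API, an exported report, etc.) is deliberately left out.

      # A toy sketch of the position-checking logic. `results` is simply a list
      # of result URLs in rank order, obtained by whatever means you prefer.
      from urllib.parse import urlparse

      def site_position(results, site, max_position=100):
          """Return the 1-based position of `site` in `results`, or None if absent."""
          target = site.lower().removeprefix("www.")
          for pos, url in enumerate(results[:max_position], start=1):
              host = urlparse(url).netloc.lower().removeprefix("www.")
              if host == target or host.endswith("." + target):
                  return pos
          return None

      # Example with hand-made data:
      results = ["https://example.org/page", "https://www.mysite.com/services"]
      print(site_position(results, "mysite.com"))    # -> 2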

7.2 Link Popularity Checker
   This program will automatically poll all available search engines and create a complete duplicate-free list of inbound links to your resource. For each link, you will see important parameters such as the link text and PageRank of the referring page. If you have studied this article, you will know how important these parameters are. As well as viewing the overall list of inbound links, you can track how the inbound links change over time.
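
   The merging step itself is straightforward. The sketch below assumes each engine's report is already available as a list of (referring URL, link text) pairs and simply collapses duplicates:

      # A minimal sketch: merge inbound-link reports from several sources into
      # one duplicate-free list.
      def merge_inbound_links(*reports):
          seen = {}
          for report in reports:
              for url, text in report:
                  key = url.rstrip("/").lower()          # treat trailing-slash variants as one link
                  seen.setdefault(key, (url, text))      # keep the first variant encountered
          return list(seen.values())

      report_a = [("http://blog.example.com/post1", "seo tips"),
                  ("http://news.example.net/", "useful article")]
      report_b = [("http://blog.example.com/post1/", "seo tips")]
      print(merge_inbound_links(report_a, report_b))     # -> two unique links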

7.3 Site Indexation Tool
   This useful tool will show you all pages indexed by a particular search engine. It is a must-have tool for anybody who is creating a new web resource. The PageRank value will be displayed for each indexed page.

7.4 Log Analyzer
   All information about your visitors is stored in the log files of your server. The log analyzer module will present this information in convenient and visual reports. Displayed information includes:
   - Originating sites
   - Keywords used
   - The countries visitors come from
   - Much more…
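
   As an illustration, here is a small sketch that pulls referrers out of an Apache "combined" format access log and extracts search phrases from them. The regular expression and the query-parameter names are assumptions that cover only the common cases:

      # Extract referrers from an Apache "combined" format access log and
      # count the search phrases found in them.
      import re
      from collections import Counter
      from urllib.parse import urlparse, parse_qs

      LINE_RE = re.compile(
          r'\S+ \S+ \S+ \[[^\]]+\] "[^"]*" \d+ \S+ "(?P<referer>[^"]*)" "[^"]*"')

      def keywords_from_log(path):
          counts = Counter()
          with open(path, encoding="utf-8", errors="replace") as log:
              for line in log:
                  match = LINE_RE.match(line)
                  if not match:
                      continue
                  referer = match.group("referer")
                  if referer in ("", "-"):
                      continue
                  query = parse_qs(urlparse(referer).query)
                  for param in ("q", "p", "query"):     # common search-query parameters
                      for phrase in query.get(param, []):
                          counts[phrase.lower()] += 1
          return counts

      # Print the ten most frequent search phrases found in the log:
      # for phrase, hits in keywords_from_log("access.log").most_common(10):
      #     print(hits, phrase)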

7.5 Page Rank Analyzer
   This utility collects a huge amount of competitive information on the list of sites that you specify. For each site it automatically determines parameters such as Google PageRank, the number of inbound links and the presence of each site in the DMOZ and Yahoo directories. It is an ideal tool for analyzing the competition rate of a particular query.

7.6 Keyword Suggestion Tool
   This tool gathers relevant keywords for your site and displays their popularity (the number of queries per month). It also estimates the competition rate of a specified keyword phrase.

7.7 HTML Analyzer
   This application analyzes the HTML code of a page. It estimates the weight and density of keywords and creates a report on how well the site text is optimized. It is useful during the creation of your own site and is also a great tool for analyzing your competitors' sites. It allows you to analyze both local HTML pages and online projects.
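
   The density calculation itself is simple: strip the markup, count the words and see what share of them is your keyword. Below is a rough sketch using only the Python standard library; real analyzers also weight words by the tag they appear in, which is omitted here:

      # A rough keyword-density sketch. Tag weighting (<title>, <h1>, <b>, etc.)
      # is intentionally left out.
      import re
      from html.parser import HTMLParser

      class TextExtractor(HTMLParser):
          def __init__(self):
              super().__init__()
              self.chunks = []
              self._skip = 0
          def handle_starttag(self, tag, attrs):
              if tag in ("script", "style"):
                  self._skip += 1
          def handle_endtag(self, tag):
              if tag in ("script", "style") and self._skip:
                  self._skip -= 1
          def handle_data(self, data):
              if not self._skip:
                  self.chunks.append(data)

      def keyword_density(html, keyword):
          parser = TextExtractor()
          parser.feed(html)
          words = re.findall(r"[a-zA-Z0-9']+", " ".join(parser.chunks).lower())
          if not words:
              return 0.0
          hits = sum(1 for w in words if w == keyword.lower())
          return 100.0 * hits / len(words)

      page = ("<html><head><title>Seo tips</title></head>"
              "<body><h1>Seo basics</h1><p>Practical seo advice.</p></body></html>")
      print(round(keyword_density(page, "seo"), 1))   # -> 42.9 (3 of 7 words)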

Instead of a conclusion: promoting your site step by step


   In this section, I will explain how I use seo in promoting my own sites. It is a kind of systematic summary where I briefly recap the previous sections. Naturally, I use Seo Administrator seo software extensively in my work and so I will show how I use it in this example.

   To be able to start working with a site, you have to possess some basic seo knowledge. This can be acquired quite quickly. The information presented in this document is perfectly adequate and I must stress that you do not have to be an optimization guru to achieve results. Once you have this basic knowledge you can then start work, experimenting, getting sites to the top of the search listings and so on. That is where seo software tools are useful.

   1. Firstly, we create an approximate list of keywords and check their competition rate. We then evaluate our chances against the competition and select words that are popular enough and have an average competition rate. Keywords are selected using the Keyword Suggestion Tool, which is also used to perform a rough check of their competition rate. We use the PageRank Analyzer module to perform a detailed analysis of search results for the most interesting queries and then make our final decision about which keywords to use.

   2. Next, we start composing text for our site. I write part of it on my own, but I entrust the most important parts to specialists in technical writing. Actually, I think the quality and attractiveness of the text is the most important attribute of a page. If the textual content is good, it will be easier to get inbound links and visitors.

   3. In this step, we start using the HTML Analyzer module to create the necessary keyword density. Each page is optimized for its own keyword phrase.

   4. We submit the site to various directories. There are plenty of services to take care of that chore for us. In addition, Seo Administrator will soon have a feature to automate the task.

   5. After these initial steps are completed, we wait and check search engine indexation to make sure that various search engines are processing the site.

   6. In this step, we can begin to check the positions of the site for our keywords. These positions are not likely to be good at this early stage, but they will give us some useful information to begin fine-tuning seo work.

   7. We use the Link Popularity Checker module to track and work on increasing the link popularity.

   8. We use the Log Analyzer module to analyze the number of visitors and work on increasing it. We also periodically repeat steps 6) - 8).