Sunday, 25 January 2009

Global Navigation Satellite System

Global Navigation Satellite System (GNSS) is the standard generic term for satellite navigation systems that provide autonomous geo-spatial positioning with global coverage.

A GNSS allows small electronic receivers to determine their location (longitude, latitude, and altitude) to within a few meters using time signals transmitted along a line of sight by radio from satellites. Receivers on the ground with a fixed position can also be used to calculate the precise time as a reference for scientific experiments.

As of 2007, the United States NAVSTAR Global Positioning System (GPS) is the only fully operational GNSS. The Russian GLONASS is a GNSS in the process of being restored to full operation.

The European Union's Galileo positioning system is a GNSS in initial deployment phase, scheduled to be operational in 2013.
China has indicated it may expand its regional Beidou navigation system into a global system. India's IRNSS, a regional system, is intended to be completed and operational by 2012.

GNSS CLASSIFICATION
GNSS that provide enhanced accuracy and integrity monitoring usable for civil navigation are classified as follows:

  1. GNSS-1 is the first-generation system and is the combination of existing satellite navigation systems (GPS and GLONASS) with Satellite Based Augmentation Systems (SBAS) or Ground Based Augmentation Systems (GBAS). In the United States, the satellite-based component is the Wide Area Augmentation System (WAAS); in Europe it is the European Geostationary Navigation Overlay Service (EGNOS); and in Japan it is the Multi-Functional Satellite Augmentation System (MSAS). Ground-based augmentation is provided by systems like the Local Area Augmentation System (LAAS).
  2. GNSS-2 is the second generation of systems that independently provides a full civilian satellite navigation system, exemplified by the European Galileo positioning system. These systems will provide the accuracy and integrity monitoring necessary for civil navigation. This system consists of L1 and L2 frequencies for civil use and L5 for system integrity. Development is also in progress to provide GPS with civil use L2 and L5 frequencies, making it a GNSS-2 system.
  3. Core Satellite navigation systems, currently GPS, Galileo and GLONASS.
  4. Global Satellite Based Augmentation Systems (SBAS) such as Omnistar and StarFire.
  5. Regional SBAS including WAAS (US), EGNOS (EU), MSAS (Japan) and GAGAN (India).
  6. Regional Satellite Navigation Systems such as QZSS (Japan), IRNSS (India) and Beidou (China).
  7. Continental scale Ground Based Augmentation Systems (GBAS) for example the Australian GRAS and the US Department of Transportation National Differential GPS (DGPS) service.
  8. Regional scale GBAS such as CORS networks.
  9. Local GBAS typified by a single GPS reference station providing Real Time Kinematic (RTK) corrections.


HISTORY AND THEORY
Early predecessors were the ground based DECCA, LORAN and Omega systems, which used terrestrial longwave radio transmitters instead of satellites.

These positioning systems broadcast a radio pulse from a known "master" location, followed by repeated pulses from a number of "slave" stations.

The delay between the reception and rebroadcast of the signal at each slave was carefully controlled, allowing receivers to compare the delay between receptions with the known delay between transmissions. From this the distance to each of the slaves could be determined, providing a fix.

The first satellite navigation system was Transit, a system deployed by the US military in the 1960s. Transit's operation was based on the Doppler effect: the satellites traveled on well-known paths and broadcast their signals on a well known frequency.

The received frequency differs slightly from the broadcast frequency because of the satellite's motion relative to the receiver. By monitoring this frequency shift over a short time interval, the receiver can determine its location to one side or the other of the satellite, and several such measurements, combined with precise knowledge of the satellite's orbit, can fix a particular position.
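As a rough numeric illustration of the Doppler principle Transit relied on (the broadcast frequency and radial velocity below are invented for the demo, not actual Transit parameters):

```python
# Classical first-order Doppler approximation:
#   f_received ≈ f_broadcast * (1 + v_radial / c)
# where v_radial > 0 means the satellite is approaching the receiver.

C = 299_792_458.0  # speed of light, m/s

def received_frequency(f_broadcast_hz: float, v_radial_ms: float) -> float:
    """Frequency seen by the receiver for a given radial velocity."""
    return f_broadcast_hz * (1.0 + v_radial_ms / C)

f0 = 400e6    # broadcast frequency, Hz (illustrative)
v = 7000.0    # satellite approaching at 7 km/s along the line of sight
shift = received_frequency(f0, v) - f0
print(f"Doppler shift: {shift:.0f} Hz")  # Doppler shift: 9340 Hz
```

Tracking how this shift passes through zero as the satellite crosses overhead is what let a Transit receiver locate itself relative to the ground track.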

Part of an orbiting satellite's broadcast included its precise orbital data. In order to ensure accuracy, the US Naval Observatory (USNO) continuously observed the precise orbits of these satellites. As a satellite's orbit deviated, the USNO would send the updated information to the satellite. Subsequent broadcasts from an updated satellite would contain the most recent accurate information about its orbit.

Modern systems are more direct. The satellite broadcasts a signal that contains the position of the satellite and the precise time the signal was transmitted. The position of the satellite is transmitted in a data message that is superimposed on a code that serves as a timing reference.
The satellite uses an atomic clock to maintain synchronization of all the satellites in the constellation. The receiver compares the time of broadcast encoded in the transmission with the time of reception measured by an internal clock, thereby measuring the time-of-flight to the satellite.

Several such measurements can be made at the same time to different satellites, allowing a continual fix to be generated in real time. Each distance measurement, regardless of the system being used, places the receiver on a spherical shell at the measured distance from the broadcaster.
By taking several such measurements and then looking for a point where they meet, a fix is generated. However, for fast-moving receivers, the receiver's position shifts while the signals from the several satellites are being received.

In addition, the radio signals slow slightly as they pass through the ionosphere, and this slowing varies with the receiver's angle to the satellite, because that changes the distance through the ionosphere. The basic computation thus attempts to find the shortest directed line tangent to four oblate spherical shells centered on four satellites.
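A minimal numeric sketch of this basic computation, assuming ideal error-free pseudoranges and made-up satellite coordinates: the receiver position and clock bias are recovered from four range equations by Gauss-Newton iteration.

```python
# Each pseudorange satisfies: rho_i = |sat_i - pos| + clock_bias (bias in metres).
# With four satellites we solve for (x, y, z, bias) by linearizing and iterating.
import math

def gauss_solve(A, b):
    """Solve A x = b for a small dense system by Gaussian elimination with pivoting."""
    n = len(b)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def solve_fix(sats, pseudoranges, iters=12):
    """Gauss-Newton estimate of (x, y, z, clock_bias) from four or more pseudoranges."""
    est = [0.0, 0.0, 0.0, 0.0]   # initial guess: Earth's centre, zero bias
    for _ in range(iters):
        H, resid = [], []
        for s, rho in zip(sats, pseudoranges):
            dvec = [s[i] - est[i] for i in range(3)]
            d = math.sqrt(sum(c * c for c in dvec))
            resid.append(rho - (d + est[3]))
            H.append([-dvec[i] / d for i in range(3)] + [1.0])
        # normal equations (H^T H) dx = H^T r, then update the estimate
        HtH = [[sum(H[k][i] * H[k][j] for k in range(len(H))) for j in range(4)] for i in range(4)]
        Htr = [sum(H[k][i] * resid[k] for k in range(len(H))) for i in range(4)]
        est = [e + d for e, d in zip(est, gauss_solve(HtH, Htr))]
    return est

# Invented constellation (metres) and a true receiver state to recover:
sats = [(20_000_000, 0, 0), (0, 20_000_000, 0),
        (0, 0, 20_000_000), (13_000_000, 13_000_000, 13_000_000)]
truth = (1_000_000.0, 2_000_000.0, 3_000_000.0)
bias = 150.0  # receiver clock error expressed in metres
pr = [math.dist(s, truth) + bias for s in sats]
x, y, z, b = solve_fix(sats, pr)
print(round(x), round(y), round(z), round(b))  # expected ≈ 1000000 2000000 3000000 150
```

Real receivers add the ionospheric and relativistic corrections the surrounding text describes; the geometry of the solve, however, is exactly this.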

Satellite navigation receivers reduce errors by using combinations of signals from multiple satellites and multiple correlators, and then using techniques such as Kalman filtering to combine the noisy, partial, and constantly changing data into a single estimate for position, time, and velocity.


CIVIL AND MILITARY USES
The original motivation for satellite navigation was military. Satellite navigation allows hitherto impossible precision in the delivery of weapons to targets, greatly increasing their lethality whilst reducing inadvertent casualties from misdirected weapons. Satellite navigation also allows forces to be directed and to locate themselves more easily, reducing the fog of war.

In these ways, satellite navigation can be regarded as a force multiplier. The ability to reduce unintended casualties is a particular advantage in wars where public relations is an important aspect of warfare. For these reasons, a satellite navigation system is an essential asset for any aspiring military power.

GNSS systems have a wide variety of uses:
  • Navigation, ranging from personal hand-held devices for trekking, to devices fitted to cars, trucks, ships and aircraft
  • Time transfer and synchronization
  • Location-based services such as enhanced 911
  • Surveying
  • Entering data into a geographic information system
  • Search and rescue
  • Geophysical Sciences
  • Tracking devices used in wildlife management
  • Asset Tracking, as in trucking fleet management
  • Road Pricing
  • Location-based media
Note that the ability to supply satellite navigation signals is also the ability to deny their availability. The operator of a satellite navigation system potentially has the ability to degrade or eliminate satellite navigation services over any territory it desires.


CURRENT GLOBAL NAVIGATION SYSTEMS
"GPS"
The United States' Global Positioning System (GPS) is, as of 2007, the only fully functional, fully available global navigation satellite system.
It consists of up to 32 medium Earth orbit satellites in six different orbital planes, with the exact number of satellites varying as older satellites are retired and replaced.
Operational since 1978 and globally available since 1994, GPS is currently the world's most utilized satellite navigation system.

"GLONASS"
The formerly Soviet, and now Russian, GLObal'naya NAvigatsionnaya Sputnikovaya Sistema, or GLONASS, was a fully functional navigation constellation but since the collapse of the Soviet Union has fallen into disrepair, leading to gaps in coverage and only partial availability.
The Russian Federation has pledged to restore it to full global availability by 2010 with the help of India, which is participating in the restoration project.


PROPOSED NAVIGATION SYSTEMS
"IRNSS"
The Indian Regional Navigational Satellite System (IRNSS) is an autonomous regional satellite navigation system being developed by the Indian Space Research Organisation and intended to be under the complete control of the Indian government.

The government approved the project in May 2006, intending the system to be completed and implemented by 2012. It will consist of a constellation of seven navigational satellites. All seven satellites will be placed in geostationary orbit (GEO) to give a larger signal footprint and allow the region to be covered with fewer satellites.

It is intended to provide an absolute position accuracy of better than 20 meters throughout India and within a region extending approximately 2,000 km around it. A goal of complete Indian control has been stated, with the space segment, ground segment and user receivers all being built in India.

"Compass"
China has indicated it intends to expand its regional navigation system, called Beidou or Big Dipper, into a global navigation system, a program that China's official news agency Xinhua has called Compass. The Compass system is proposed to use 30 medium Earth orbit satellites and five geostationary satellites.
Although China has announced that it is willing to cooperate with other countries in Compass's creation, it is unclear how the proposed program affects China's commitment to the international Galileo positioning system.

"DORIS"
Doppler Orbitography and Radio-positioning Integrated by Satellite (DORIS) is a French precision navigation system.

"Galileo"
The European Union and European Space Agency agreed in March 2002 to introduce their own alternative to GPS, called the Galileo positioning system. At a cost of about GBP 2.4 billion, the system is scheduled to be working from 2012.
The first experimental satellite was launched on 28 December 2005. Galileo is expected to be compatible with the modernized GPS system. The receivers will be able to combine the signals from both Galileo and GPS satellites to greatly increase the accuracy.

"QZSS"
The Quasi-Zenith Satellite System (QZSS) is a proposed three-satellite regional time-transfer system and GPS enhancement covering Japan. The first satellite is scheduled to be launched in 2008.


GNSS AUGMENTATION
GNSS Augmentation involves using external information, often integrated into the calculation process, to improve the accuracy, availability, or reliability of the satellite navigation signal.
There are many such systems in place and they are generally named or described based on how the GNSS sensor receives the information.

Some systems transmit additional information about sources of error (such as clock drift, ephemeris, or ionospheric delay), others provide direct measurements of how much the signal was off in the past, while a third group provides additional navigational or vehicle information to be integrated into the calculation process.

Examples of augmentation systems include the Wide Area Augmentation System, the European Geostationary Navigation Overlay Service, the Multi-functional Satellite Augmentation System, Differential GPS, and Inertial Navigation Systems.
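The differential idea behind systems such as DGPS can be sketched as follows; the station coordinates, satellite positions, and error magnitudes are all invented for illustration. A reference station at a precisely surveyed location compares each measured pseudorange with the geometric range it should observe, and broadcasts the difference; a nearby rover applies the corrections, cancelling errors common to both receivers.

```python
import math

station = (0.0, 0.0, 6_371_000.0)   # precisely known surveyed position (m)
sats = [(20e6, 5e6, 18e6), (-15e6, 10e6, 19e6), (3e6, -20e6, 17e6)]
common_error = [4.2, -2.7, 3.1]     # per-satellite error (m) shared by both receivers

# Reference station: measured pseudorange = true geometric range + common error,
# so the broadcast correction is simply the negative of that error.
corrections = []
for sat, err in zip(sats, common_error):
    measured = math.dist(sat, station) + err
    corrections.append(math.dist(sat, station) - measured)

# Rover: applies the broadcast corrections before computing its fix.
rover = (1000.0, 2000.0, 6_371_000.0)
corrected_ranges = []
for sat, err, corr in zip(sats, common_error, corrections):
    raw = math.dist(sat, rover) + err
    corrected_ranges.append(raw + corr)   # now equals the true geometric range
```

The technique works because ionospheric delay, satellite clock error, and ephemeris error vary slowly over the distance between the station and the rover, so subtracting the station's measured error removes most of the rover's error too.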


LOW EARTH ORBIT SATELLITE PHONE NETWORKS
The two currently operational low Earth orbit satellite phone networks are able to track transceiver units with an accuracy of a few kilometers using Doppler shift calculations from the satellite. The coordinates are sent back to the transceiver unit, where they can be read using AT commands or a graphical user interface.
This can also be used by the gateway to enforce restrictions on geographically bound calling plans.



source: en.wikipedia.org

Internet Protocol

The Internet Protocol (IP) is a protocol used for communicating data across a packet-switched internetwork using the Internet Protocol Suite, also referred to as TCP/IP.

IP is the primary protocol in the Internet Layer of the Internet Protocol Suite and has the task of delivering distinguished protocol datagrams (packets) from the source host to the destination host solely based on their addresses.
For this purpose the Internet Protocol defines addressing methods and structures for datagram encapsulation.

The first major version of the addressing structure, now referred to as Internet Protocol Version 4 (IPv4), is still the dominant protocol of the Internet, although the successor, Internet Protocol Version 6 (IPv6), is being actively deployed worldwide.


IP ENCAPSULATION
Data from an upper layer protocol is encapsulated as packets/datagrams (the terms are basically synonymous in IP). No circuit setup is needed before a host sends packets to another host with which it has not previously communicated (a characteristic of packet-switched networks); thus IP is a connectionless protocol.

This is in contrast to Public Switched Telephone Networks that require the setup of a circuit before a phone call may go through (connection-oriented protocol).


SERVICES PROVIDED BY IP
Because of the abstraction provided by encapsulation, IP can be used over a heterogeneous network, i.e., a network connecting computers may consist of a combination of Ethernet, ATM, FDDI, Wi-Fi, token ring, or others.

Each link layer implementation may have its own method of addressing (or possibly the complete lack of it), with a corresponding need to resolve IP addresses to data link addresses. This address resolution is handled by the Address Resolution Protocol (ARP) for IPv4 and Neighbor Discovery Protocol (NDP) for IPv6.



RELIABILITY
The design principles of the Internet protocols assume that the network infrastructure is inherently unreliable at any single network element or transmission medium and that it is dynamic in terms of availability of links and nodes.

No central monitoring or performance measurement facility exists that tracks or maintains the state of the network. To reduce network complexity, the intelligence of the network is purposely located mostly in the end nodes of each data transmission (cf. the end-to-end principle). Routers in the transmission path simply forward packets to the next known local gateway matching the routing prefix for the destination address.
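This prefix-matching step can be sketched with the standard longest-prefix-match rule: the router picks the most specific routing-table entry that covers the destination address. The table entries and gateway names below are made up for the demo.

```python
# Longest-prefix match, the core of IP forwarding.
import ipaddress

table = {
    "0.0.0.0/0":   "gw-default",   # default route: matches everything
    "10.0.0.0/8":  "gw-a",
    "10.1.0.0/16": "gw-b",
}

def next_hop(dst: str) -> str:
    """Return the gateway of the most specific table entry covering dst."""
    addr = ipaddress.ip_address(dst)
    best = max((ipaddress.ip_network(p) for p in table
                if addr in ipaddress.ip_network(p)),
               key=lambda n: n.prefixlen)
    return table[str(best)]

print(next_hop("10.1.2.3"))   # gw-b      (the /16 beats the /8 and the /0)
print(next_hop("10.9.9.9"))   # gw-a
print(next_hop("8.8.8.8"))    # gw-default
```

Production routers implement the same rule with trie or TCAM lookups rather than a linear scan, but the forwarding decision is identical.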

As a consequence of this design, the Internet Protocol provides only best-effort delivery and its service can also be characterized as unreliable. In network architectural language it is a connectionless protocol, in contrast to so-called connection-oriented modes of transmission.

The lack of reliability allows any of the following fault events to occur:

  • data corruption
  • lost data packets
  • duplicate arrival
  • out-of-order packet delivery; that is, if packet 'A' is sent before packet 'B', packet 'B' may arrive before packet 'A'. Since routing is dynamic and the network keeps no memory of the path of prior packets, it is possible that the first packet sent takes a longer path to its destination.
The only assistance that the Internet Protocol provides in Version 4 (IPv4) is to ensure that the IP packet header is error-free through computation of a checksum at the routing nodes. This has the side-effect of discarding packets with bad headers on the spot. In this case no notification is required to be sent to either end node, although a facility exists in the Internet Control Message Protocol (ICMP) to do so.
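The header check described above is the Internet checksum of RFC 1071: the one's-complement sum of the header's 16-bit words (with the checksum field taken as zero), inverted. A receiver re-sums the header including the checksum; a valid header folds to zero. The header bytes below are a hand-built example, not captured traffic.

```python
import struct

def internet_checksum(data: bytes) -> int:
    """One's-complement sum of 16-bit words, folded and inverted (RFC 1071)."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack(f"!{len(data)//2}H", data))
    while total >> 16:                     # fold carries back into the low 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# 20-byte IPv4 header with the checksum field (bytes 10-11) zeroed:
header = struct.pack("!BBHHHBBH4s4s",
    0x45, 0, 40,        # version/IHL, DSCP/ECN, total length
    0x1234, 0,          # identification, flags/fragment offset
    64, 6, 0,           # TTL, protocol (6 = TCP), checksum placeholder
    bytes([192, 168, 0, 1]), bytes([10, 0, 0, 1]))

csum = internet_checksum(header)
filled = header[:10] + struct.pack("!H", csum) + header[12:]
assert internet_checksum(filled) == 0      # a router's validity check passes
```

A router that computes a nonzero result over a received header discards the packet on the spot, exactly as the paragraph above describes.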

IPv6, on the other hand, has abandoned the use of IP header checksums for the benefit of rapid forwarding through routing elements in the network.
The resolution or correction of any of these reliability issues is the responsibility of an upper layer protocol. For example, to ensure in-order delivery the upper layer may have to cache data until it can be passed to the application.

In addition to issues of reliability, this dynamic nature and the diversity of the Internet and its components provide no guarantee that any particular path is actually capable of, or suitable for performing the data transmission requested, even if the path is available and reliable.

One of the technical constraints is the size of data packets allowed on a given link. An application must assure that it uses proper transmission characteristics. Some of this responsibility lies also in the upper layer protocols between application and IP.

Facilities exist to examine the maximum transmission unit (MTU) size of the local link, as well as for the entire projected path to the destination when using IPv6. The IPv4 internetworking layer has the capability to automatically fragment the original datagram into smaller units for transmission.

In this case, IP does provide re-ordering of fragments delivered out-of-order.
Transmission Control Protocol (TCP) is an example of a protocol that will adjust its segment size to be smaller than the MTU. User Datagram Protocol (UDP) and Internet Control Message Protocol (ICMP) disregard MTU size thereby forcing IP to fragment oversized datagrams.[2]
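The fragmentation step can be sketched as follows, assuming a fixed 20-byte header; real routers also copy and adjust header fields (identification, flags, checksum) for each fragment. Fragment payloads must be multiples of 8 bytes because the fragment-offset field counts 8-byte units.

```python
HEADER = 20  # assumed IPv4 header size, no options

def fragment(payload: bytes, mtu: int):
    """Split a payload into fragments that fit the MTU, offsets in 8-byte units."""
    max_data = (mtu - HEADER) // 8 * 8     # largest multiple of 8 that fits
    frags, offset = [], 0
    while offset < len(payload):
        chunk = payload[offset:offset + max_data]
        more = offset + len(chunk) < len(payload)   # MF flag: more fragments follow
        frags.append({"offset_units": offset // 8, "mf_flag": more, "data": chunk})
        offset += len(chunk)
    return frags

frags = fragment(b"x" * 4000, mtu=1500)
# 1500-byte MTU -> 1480 data bytes per fragment: 1480 + 1480 + 1040
print([len(f["data"]) for f in frags])      # [1480, 1480, 1040]
```

The receiving host reassembles by placing each fragment at `offset_units * 8` and waiting until a fragment with the MF flag clear completes the datagram.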


IP ADDRESSING AND ROUTING
Perhaps the most complex aspects of IP are IP addressing and routing. Addressing refers to how end hosts become assigned IP addresses and how subnetworks of IP host addresses are divided and grouped together.

IP routing is performed by all hosts, but most importantly by internetwork routers, which typically use either interior gateway protocols (IGPs) or exterior gateway protocols (EGPs) to help make IP datagram forwarding decisions across IP-connected networks.


VERSION HISTORY
In May 1974, the Institute of Electrical and Electronics Engineers (IEEE) published a paper entitled "A Protocol for Packet Network Intercommunication."

The paper's authors, Vint Cerf and Bob Kahn, described an internetworking protocol for sharing resources using packet-switching among the nodes. A central control component of this model was the "Transmission Control Program" (TCP) that incorporated both connection-oriented links and datagram services between hosts.

The monolithic Transmission Control Program was later divided into a modular architecture consisting of the Transmission Control Protocol at the connection-oriented layer and the Internet Protocol at the internetworking (datagram) layer. The model became known informally as TCP/IP, although formally it was henceforth referenced as the Internet Protocol Suite.

The Internet Protocol is one of the determining elements that define the Internet. The dominant internetworking protocol (Internet Layer) in use today is IPv4, with the number 4 assigned as the formal protocol version number carried in every IP datagram. IPv4 is described in RFC 791 (1981).

The successor to IPv4 is IPv6. Its most prominent modification from Version 4 is the addressing system. IPv4 uses 32-bit addresses (~4 billion, or ~4.3×10^9, addresses) while IPv6 uses 128-bit addresses (~340 undecillion, or ~3.4×10^38, addresses). Although adoption of IPv6 has been slow, as of June 2008, all United States government systems have demonstrated basic infrastructure support for IPv6 (if only at the backbone level).
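The address-space figures follow directly from the address widths:

```python
# Address-space sizes implied by 32-bit and 128-bit addresses.
ipv4 = 2 ** 32
ipv6 = 2 ** 128
print(ipv4)            # 4294967296  (~4.3 × 10^9)
print(f"{ipv6:.1e}")   # 3.4e+38
```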

Version numbers 0 through 3 were development versions of IPv4 used between 1977 and 1979. Version number 5 was used by the Internet Stream Protocol (ST), an experimental stream protocol. Version numbers 6 through 9 were proposed for various protocol models designed to replace IPv4: SIPP (Simple Internet Protocol Plus, known now as IPv6), TP/IX (RFC 1475), PIP (RFC 1621) and TUBA (TCP and UDP with Bigger Addresses, RFC 1347). Version number 6 was eventually chosen as the official assignment for the successor Internet protocol, subsequently standardized as IPv6.

In 2004, a Chinese project called IPv9 was briefly mentioned in the press as a possible competitor to IPv6. The proposal had no affiliation with or support by any international standards body, and appears to have gained no traction even within China.[5]
A humorous Request for Comments that made an IPv9 protocol the center of its storyline was published by the IETF on April 1, 1994. It was intended as an April Fools' Day joke.


REFERENCE DIAGRAMS
Internet Protocol Suite in operation between two hosts connected via two routers and the corresponding layers used at each hop
Sample encapsulation of application data from UDP to a Link protocol frame



source: en.wikipedia.org


Computer Network

A computer network is a group of interconnected computers. Networks may be classified according to a wide variety of characteristics. This article provides a general overview of some types and categories and also presents the basic components of a network.

A network is a collection of computers connected to each other. The network allows computers to communicate with each other and share resources and information.

The Advanced Research Projects Agency (ARPA) designed the "Advanced Research Projects Agency Network" (ARPANET) for the United States Department of Defense. It was the first computer network in the world, deployed in the late 1960s and early 1970s.


NETWORK CLASSIFICATION
The following list presents categories used for classifying networks.

"Scale"
Based on their scale, networks can be classified as Local Area Network (LAN), Wide Area Network (WAN), Metropolitan Area Network (MAN), Personal Area Network (PAN), Virtual Private Network (VPN), Campus Area Network (CAN), Storage Area Network (SAN), etc.

"Connection method"
Computer networks can also be classified according to the hardware and software technology that is used to interconnect the individual devices in the network, such as Optical fiber, Ethernet, Wireless LAN, HomePNA, or Power line communication.

Ethernet uses physical wiring to connect devices. Frequently deployed devices include hubs, switches, bridges and/or routers.

Wireless LAN technology is designed to connect devices without wiring. These devices use radio waves or infrared signals as a transmission medium.

"Functional relationship (network architectures)"
Computer networks may be classified according to the functional relationships which exist among the elements of the network, e.g., Active Networking, Client-server and Peer-to-peer (workgroup) architecture.

"Network topology"
Computer networks may be classified according to the network topology upon which the network is based, such as Bus network, Star network, Ring network, Mesh network, Star-bus network, Tree or Hierarchical topology network.

Network Topology signifies the way in which devices in the network see their logical relations to one another. The use of the term "logical" here is significant. That is, network topology is independent of the "physical" layout of the network.

Even if networked computers are physically placed in a linear arrangement, if they are connected via a hub, the network has a Star topology, rather than a Bus Topology. In this regard the visual and operational characteristics of a network are distinct; the logical network topology is not necessarily the same as the physical layout.

Networks may also be classified according to the method used to convey the data; these include digital and analog networks.


TYPES OF NETWORK
Below is a list of the most common types of computer networks in order of scale.

"Personal Area Network (PAN)"
A Personal Area Network (PAN) is a computer network used for communication among computer devices close to one person. Some examples of devices that are used in a PAN are printers, fax machines, telephones, PDAs and scanners. The reach of a PAN is typically about 20-30 feet (approximately 6-9 meters), but this is expected to increase with technology improvements.

"Local Area Network (LAN)"
A Local Area Network (LAN) is a computer network covering a small physical area, like a home, office, or small group of buildings, such as a school, or an airport. Current LANs are most likely to be based on Ethernet technology.

For example, a library may have a wired or wireless LAN for users to interconnect local devices (e.g., printers and servers) and to connect to the internet. On a wired LAN, PCs in the library are typically connected by category 5 (Cat5) cable, running the IEEE 802.3 protocol through a system of interconnected devices that eventually connects to the Internet.

The cables to the servers are typically on Cat 5e enhanced cable, which will support IEEE 802.3 at 1 Gbit/s. A wireless LAN may exist using a different IEEE protocol, 802.11b, 802.11g or possibly 802.11n.

The staff computers can get to the color printer, checkout records, the academic network, and the Internet. All user computers can get to the Internet and the card catalog. Each workgroup can get to its local printer. Note that the printers are not accessible from outside their workgroup.

[Figure: Typical library network, in a branching tree topology with controlled access to resources]
All interconnected devices must understand the network layer (layer 3), because they handle multiple subnets.

The devices inside the library, which have only 10/100 Mbit/s Ethernet connections to the user devices and a Gigabit Ethernet connection to the central router, could be called "layer 3 switches" because they only have Ethernet interfaces and must understand IP. It would be more correct to call them access routers, where the router at the top is a distribution router that connects to the Internet and to the academic networks' customer access routers.

The defining characteristics of LANs, in contrast to WANs (wide area networks), include their higher data transfer rates, smaller geographic range, and lack of a need for leased telecommunication lines. Current Ethernet or other IEEE 802.3 LAN technologies operate at data transfer rates of up to 10 Gbit/s. IEEE has projects investigating the standardization of 40 Gbit/s and 100 Gbit/s.

"Campus Area Network (CAN)"
A Campus Area Network (CAN) is a computer network made up of an interconnection of local area networks (LANs) within a limited geographical area. It can be considered one form of a metropolitan area network, specific to an academic setting.

In the case of a university campus-based campus area network, the network is likely to link a variety of campus buildings, including academic departments, the university library, and student residence halls. A campus area network is larger than a local area network but, in some cases, smaller than a wide area network (WAN).

The main aim of a campus area network is to facilitate students' access to the Internet and to university resources. It is a network that connects two or more LANs but is limited to a specific and contiguous geographical area, such as a college campus, an industrial complex, an office building, or a military base.

A CAN may be considered a type of MAN (metropolitan area network), but is generally limited to a smaller area than a typical MAN. This term is most often used to discuss the implementation of networks for a contiguous area. This should not be confused with a Controller Area Network. A LAN connects network devices over a relatively short distance.

A networked office building, school, or home usually contains a single LAN, though sometimes one building will contain a few small LANs (perhaps one per room), and occasionally a LAN will span a group of nearby buildings. In TCP/IP networking, a LAN is often but not always implemented as a single IP subnet.

"Metropolitan Area Network (MAN)"
A Metropolitan Area Network (MAN) is a network that connects two or more Local Area Networks or Campus Area Networks together but does not extend beyond the boundaries of the immediate town/city. Routers, switches and hubs are connected to create a Metropolitan Area Network.

"Wide Area Network (WAN)"

A Wide Area Network (WAN) is a computer network that covers a broad area (i.e., any network whose communications links cross metropolitan, regional, or national boundaries). Less formally, a WAN is a network that uses routers and public communications links.

This contrasts with personal area networks (PANs), local area networks (LANs), campus area networks (CANs), and metropolitan area networks (MANs), which are usually limited to a room, a building, a campus, or a specific metropolitan area (e.g., a city), respectively.

The largest and most well-known example of a WAN is the Internet. A WAN is a data communications network that covers a relatively broad geographic area (i.e., from one city to another, or from one country to another) and that often uses transmission facilities provided by common carriers, such as telephone companies. WAN technologies generally function at the lower three layers of the OSI reference model: the physical layer, the data link layer, and the network layer.

"Global Area Network (GAN)"
Global Area Network (GAN) specifications are in development by several groups, and there is no common definition. In general, however, a GAN is a model for supporting mobile communications across an arbitrary number of wireless LANs, satellite coverage areas, etc.

The key challenge in mobile communications is "handing off" the user communications from one local coverage area to the next. In IEEE Project 802, this involves a succession of terrestrial Wireless local area networks (WLAN).

"Virtual Private Network (VPN)"

A Virtual Private Network (VPN) is a computer network in which some of the links between nodes are carried by open connections or virtual circuits in some larger network (e.g., the Internet) instead of by physical wires. The link-layer protocols of the virtual network are said to be tunneled through the larger network when this is the case.

One common application is secure communications through the public Internet, but a VPN need not have explicit security features, such as authentication or content encryption. VPNs, for example, can be used to separate the traffic of different user communities over an underlying network with strong security features.

A VPN may have best-effort performance, or may have a defined service level agreement (SLA) between the VPN customer and the VPN service provider. Generally, a VPN has a topology more complex than point-to-point.

A VPN allows computer users to appear to be connecting from an IP address location other than the one that actually connects the computer to the Internet.

"Internetwork"
Internetworking involves connecting two or more distinct computer networks or network segments via a common routing technology. The result is called an internetwork (often shortened to internet).

Two or more networks or network segments are connected using devices that operate at layer 3 (the 'network' layer) of the OSI Basic Reference Model, such as a router. Any interconnection among or between public, private, commercial, industrial, or governmental networks may also be defined as an internetwork.

In modern practice, the interconnected networks use the Internet Protocol. There are at least three variants of internetwork, depending on who administers and who participates in them:

  • Intranet
  • Extranet
  • Internet

Intranets and extranets may or may not have connections to the Internet. If connected to the Internet, the intranet or extranet is normally protected from being accessed from the Internet without proper authorization. The Internet is not considered to be a part of the intranet or extranet, although it may serve as a portal for access to portions of an extranet.

Intranet

An intranet is a set of networks, using the Internet Protocol and IP-based tools such as web browsers and file transfer applications, that is under the control of a single administrative entity.

That administrative entity closes the intranet to all but specific, authorized users. Most commonly, an intranet is the internal network of an organization. A large intranet will typically have at least one web server to provide users with organizational information.

Extranet
An extranet is a network or internetwork that is limited in scope to a single organization or entity but which also has limited connections to the networks of one or more other usually, but not necessarily, trusted organizations or entities (e.g. a company's customers may be given access to some part of its intranet creating in this way an extranet, while at the same time the customers may not be considered 'trusted' from a security standpoint).

Technically, an extranet may also be categorized as a CAN, MAN, WAN, or other type of network, although, by definition, an extranet cannot consist of a single LAN; it must have at least one connection with an external network.

Internet
The Internet is a specific internetwork. It consists of a worldwide interconnection of governmental, academic, public, and private networks based upon the networking technologies of the Internet Protocol Suite. It is the successor of the Advanced Research Projects Agency Network (ARPANET) developed by DARPA of the U.S. Department of Defense.

The Internet is also the communications backbone underlying the World Wide Web (WWW). The 'Internet' is most commonly spelled with a capital 'I' as a proper noun, for historical reasons and to distinguish it from other generic internetworks.

Participants in the Internet use a diverse array of several hundred documented, and often standardized, protocols compatible with the Internet Protocol Suite, and an addressing system (IP addresses) administered by the Internet Assigned Numbers Authority and address registries.

Service providers and large enterprises exchange information about the reachability of their address spaces through the Border Gateway Protocol (BGP), forming a redundant worldwide mesh of transmission paths.


BASIC HARDWARE COMPONENTS
All networks are made up of basic hardware building blocks that interconnect network nodes, such as Network Interface Cards (NICs), Bridges, Hubs, Switches, and Routers. In addition, some method of connecting these building blocks is required, usually in the form of galvanic cable (most commonly Category 5 cable). Less common are microwave links (as in IEEE 802.11) or optical cable ("optical fiber"). An Ethernet card may also be required.

"Network Interface Cards"
A network card, network adapter or NIC (network interface card) is a piece of computer hardware designed to allow computers to communicate over a computer network. It provides physical access to a networking medium and often provides a low-level addressing system through the use of MAC addresses. It allows users to connect to each other either by using cables or wirelessly. A NIC transfers data at rates measured in bits per second, typically megabits or gigabits per second.

"Repeaters"
A repeater is an electronic device that receives a signal and retransmits it at a higher power level, or to the other side of an obstruction, so that the signal can cover longer distances without degradation. In most twisted pair Ethernet configurations, repeaters are required for cable runs longer than 100 meters.

"Hubs"
A hub contains multiple ports. When a packet arrives at one port, it is copied to all the other ports of the hub for transmission; the destination address in the frame is left unchanged. A hub works in a rudimentary way: it simply copies the data to all of the nodes connected to it.

"Bridges"
A network bridge connects multiple network segments at the data link layer (layer 2) of the OSI model. Bridges do not promiscuously copy traffic to all ports, as hubs do, but learn which MAC addresses are reachable through specific ports. Once the bridge associates a port and an address, it will send traffic for that address only to that port. Bridges do send broadcasts to all ports except the one on which the broadcast was received.

Bridges learn the association of ports and addresses by examining the source addresses of the frames they see on various ports. When a frame arrives through a port, its source address is stored, and the bridge assumes that the MAC address is associated with that port. The first time a previously unknown destination address is seen, the bridge forwards the frame to all ports other than the one on which it arrived.
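This learn-and-forward behaviour can be sketched as a toy simulation. The port numbers and MAC addresses below are made-up illustrative values, and real bridges add details (aging timers, spanning tree) that are omitted here:

```python
# Toy simulation of the MAC-learning behaviour described above.

class LearningBridge:
    def __init__(self, ports):
        self.ports = ports
        self.table = {}          # learned MAC address -> port

    def receive(self, in_port, src_mac, dst_mac):
        """Return the list of ports the frame is forwarded to."""
        # Learn: the source MAC is reachable via the arrival port.
        self.table[src_mac] = in_port
        if dst_mac in self.table:
            # Known destination: forward only to its port.
            return [self.table[dst_mac]]
        # Unknown destination: flood to all ports except the arrival port.
        return [p for p in self.ports if p != in_port]

bridge = LearningBridge(ports=[1, 2, 3])
print(bridge.receive(1, "aa:aa", "bb:bb"))   # [2, 3]  unknown dst: flood
print(bridge.receive(2, "bb:bb", "aa:aa"))   # [1]     aa:aa was learned above
```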

Bridges come in three basic types:
  1. Local bridges: Directly connect local area networks (LANs)
  2. Remote bridges: Can be used to create a wide area network (WAN) link between LANs. Remote bridges, where the connecting link is slower than the end networks, largely have been replaced by routers.
  3. Wireless bridges: Can be used to join LANs or connect remote stations to LANs.

"Switches"
A switch is a device that forwards and filters OSI layer 2 datagrams (chunks of data communication) between ports (connected cables) based on the MAC addresses in the packets. It is distinct from a hub in that it forwards the datagrams only to the ports involved in the communication rather than to all connected ports. Strictly speaking, a switch is not capable of routing traffic based on IP address (layer 3), which is necessary for communicating between network segments or within a large or complex LAN.

Some switches are capable of routing based on IP addresses but are still called switches as a marketing term. A switch normally has numerous ports, with the intention being that most or all of the network is connected directly to the switch, or to another switch that is in turn connected to it.

"Switch" is a marketing term that encompasses routers and bridges, as well as devices that may distribute traffic on load or by application content (e.g., a Web URL identifier). Switches may operate at one or more OSI model layers, including physical, data link, network, or transport (i.e., end-to-end). A device that operates simultaneously at more than one of these layers is called a multilayer switch.

Overemphasizing the ill-defined term "switch" often leads to confusion when first trying to understand networking. Many experienced network designers and operators recommend starting with the logic of devices dealing with only one protocol level, not all of which are covered by OSI. Multilayer device selection is an advanced topic that may lead to selecting particular implementations, but multilayer switching is simply not a real-world design concept.

"Routers"
Routers are networking devices that forward data packets between networks using headers and forwarding tables to determine the best path to forward the packets. Routers work at the network layer of the TCP/IP model or layer 3 of the OSI model. Routers also provide interconnectivity between like and unlike media (RFC 1812).

This is accomplished by examining the header of a data packet and making a decision on the next hop to which it should be sent (RFC 1812). Routers use preconfigured static routes, the status of their hardware interfaces, and routing protocols to select the best route between any two subnets.
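The core of that forwarding decision is longest-prefix matching: of all routes covering a destination, the most specific one wins. The sketch below illustrates this with a made-up routing table; the network prefixes and next-hop names are illustrative values, not a real router's configuration:

```python
# Toy longest-prefix-match lookup, the core of the forwarding decision.
import ipaddress

routes = {
    ipaddress.ip_network("0.0.0.0/0"):   "isp-gateway",     # default route
    ipaddress.ip_network("10.0.0.0/8"):  "core-router",
    ipaddress.ip_network("10.1.2.0/24"): "branch-router",
}

def next_hop(dst):
    addr = ipaddress.ip_address(dst)
    # Of all routes that cover the address, the longest prefix wins.
    matches = [net for net in routes if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return routes[best]

print(next_hop("10.1.2.7"))     # branch-router  (matched by /24)
print(next_hop("10.9.9.9"))     # core-router    (matched by /8)
print(next_hop("192.0.2.1"))    # isp-gateway    (default route only)
```

Production routers implement the same idea with specialized data structures (tries, TCAM) so lookups stay fast at line rate.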

A router is connected to at least two networks, commonly two LANs or WANs, or a LAN and its ISP's network. Some DSL and cable modems, for home (and even office) use, have been integrated with routers to allow multiple home/office computers to access the Internet through the same connection. Many of these devices also include wireless access points (WAPs) or act as wireless routers to allow IEEE 802.11b/g/n wireless-enabled devices to connect to the network without the need for cabled connections.



source: en.wikipedia.org


Thursday, 22 January 2009

Web Browser

A Web browser is a software application which enables a user to display and interact with text, images, videos, music, games and other information typically located on a Web page at a Web site on the World Wide Web or a local area network.

Text and images on a Web page can contain hyperlinks to other Web pages at the same or different Web site. Web browsers allow a user to quickly and easily access information provided on many Web pages at many Web sites by traversing these links. Web browsers format HTML information for display, so the appearance of a Web page may differ between browsers.

Web browsers are the most-commonly-used type of HTTP user agent. Although browsers are typically used to access the World Wide Web, they can also be used to access information provided by Web servers in private networks or content in file systems.

HISTORY
The history of the Web browser dates back to the late 1980s, when a variety of technologies laid the foundation for the first Web browser, WorldWideWeb, created by Tim Berners-Lee in 1991. That browser brought together a variety of existing and new software and hardware technologies.

Over the following years, Web browsers were introduced by organizations such as Netscape, Microsoft, Opera, Mozilla, and Apple. More recently, Google entered the browser market.

CURRENT WEB BROWSERS
Some of the Web browsers currently available for personal computers include Internet Explorer, Mozilla Firefox, Safari, Opera, Avant Browser, Konqueror, Lynx, Google Chrome, Maxthon, Flock, Arachne, Epiphany, K-Meleon and AOL Explorer.

PROTOCOLS & STANDARDS
Web browsers communicate with Web servers primarily using Hypertext Transfer Protocol (HTTP) to fetch Web pages. HTTP allows Web browsers to submit information to Web servers as well as fetch Web pages from them.

The most-commonly-used version of HTTP is HTTP/1.1, which is fully defined in RFC 2616. HTTP/1.1 has its own required standards that Internet Explorer does not fully support, but most other current-generation Web browsers do.

Pages are located by means of a URL (Uniform Resource Locator, RFC 1738), which is treated as an address, beginning with http: for HTTP transmission.
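What a browser actually sends when fetching such a URL is plain text: an HTTP/1.1 request line, headers, and a blank line. The sketch below builds one such request; the host and path are made-up illustrative values, and real browsers add many more headers (User-Agent, Accept, cookies):

```python
# Sketch of the HTTP/1.1 request a browser sends to fetch a page.

def build_get_request(host, path="/"):
    # An HTTP/1.1 request is plain text terminated by a blank line.
    # The Host header is mandatory in HTTP/1.1.
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        f"Connection: close\r\n"
        f"\r\n"
    )

request = build_get_request("www.example.com", "/index.html")
print(request.splitlines()[0])   # GET /index.html HTTP/1.1
```

Sent over a TCP connection to port 80, this text is exactly what a Web server parses before returning the page.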

Many browsers also support a variety of other URL types and their corresponding protocols, such as gopher: for Gopher (a hierarchical hyperlinking protocol), ftp: for File Transfer Protocol (FTP), rtsp: for Real-time Streaming Protocol (RTSP), and https: for HTTPS (HTTP Secure, which is HTTP augmented by Secure Sockets Layer or Transport Layer Security).

The file format for a Web page is usually HTML (HyperText Markup Language) and is identified in the HTTP protocol using a MIME content type. Most browsers natively support a variety of formats in addition to HTML, such as the JPEG, PNG and GIF image formats, and can be extended to support more through the use of plugins.

The combination of HTTP content type and URL protocol specification allows Web-page designers to embed images, animations, video, sound, and streaming media into a Web page, or to make them accessible through the Web page.

Early Web browsers supported only a very simple version of HTML. The rapid development of proprietary Web browsers led to the development of non-standard dialects of HTML, leading to problems with Web interoperability.

Modern Web browsers support a combination of standards-based and de facto HTML and XHTML, which should be rendered in the same way by all browsers.
No browser fully supports HTML 4.01, XHTML 1.x or CSS 2.1 yet. Many sites are designed using WYSIWYG HTML-generation programs such as Adobe Dreamweaver or Microsoft FrontPage.

Microsoft FrontPage often generates non-standard HTML by default, hindering the work of the W3C in promulgating standards, specifically with XHTML and Cascading Style Sheets (CSS), which are used for page layout.
Adobe Dreamweaver and more modern Microsoft development tools such as Microsoft Expression Web and Microsoft Visual Studio conform to the W3C standards.

Some of the more popular browsers include additional components to support Usenet news, Internet Relay Chat (IRC), and e-mail. Protocols supported may include Network News Transfer Protocol (NNTP), Simple Mail Transfer Protocol (SMTP), Internet Message Access Protocol (IMAP), and Post Office Protocol (POP).
These browsers are often referred to as "Internet suites" or "application suites" rather than merely Web browsers.


Source: en.wikipedia.org


Web Portal

A web portal presents information from diverse sources in a unified way. Apart from the standard search engine feature, web portals offer other services such as e-mail, news, stock prices, infotainment, and other features.

Portals provide a way for enterprises to provide a consistent look and feel with access control and procedures for multiple applications, which otherwise would have been different entities altogether. An example of a web portal is MSN.


HISTORY

In the late 1990s the web portal was a hot commodity. After the proliferation of web browsers in the mid-1990s many companies tried to build or acquire a portal, to have a piece of the Internet market. The web portal gained special attention because it was, for many users, the starting point of their web browser.

Netscape became a part of America Online, the Walt Disney Company launched Go.com, and Excite and @Home became a part of AT&T during the late 1990s. Lycos was said to be a good target for other media companies such as CBS.

Many of the portals started initially as either web directories (notably Yahoo!) or search engines (Excite, Lycos, AltaVista, infoseek, Hotbot were among the earliest). Expanding services was a strategy to secure the user-base and lengthen the time a user stayed on the portal.

Services which require user registration such as free email, customization features, and chatrooms were considered to enhance repeat use of the portal. Game, chat, email, news, and other services also tend to make users stay longer, thereby increasing the advertising revenue.

The portal craze, with "old media" companies racing to outbid each other for Internet properties, died down with the dot-com flameout in 2000 and 2001. Disney pulled the plug on Go.com, and Excite went bankrupt and its remains were sold to iWon.com. Some portal sites, such as Yahoo!, remain successful.


KINDS OF PORTAL

Two broad categorizations of portals are Horizontal portals (e.g. Yahoo) and Vertical portals (or vortals, focused on one functional area, e.g. salesforce.com).

"Personal portals"
A personal portal is a site on the World Wide Web that typically provides personalized capabilities to its visitors, providing a pathway to other content.

It is designed to use distributed applications, different numbers and types of middleware and hardware to provide services from a number of different sources.

In addition, business portals are designed to support collaboration in workplaces. A further business-driven requirement of portals is that the content be able to work on multiple platforms such as personal computers, personal digital assistants (PDAs), and cell phones/mobile phones.
A personal or web portal can be integrated with many forum systems.

"Regional web portals"
Along with the development and success of international personal portals such as Yahoo!, regional variants have also sprung up. Some regional portals contain local information such as weather forecasts, street maps and local business information.

Another notable expansion over the past couple of years is the move into formerly unthinkable markets.

"Local content - global reach" portals have emerged not only from countries like Korea (Naver), India (Rediff), China (Sina.com), Romania (Neogen.ro), Greece (in.gr) and Italy (Webplace.it), but in countries like Vietnam where they are very important for learning how to apply e-commerce, e-government, etc. Such portals reach out to the widespread diaspora across the world.

"Government web portals"
At the end of the dot-com boom in the 1990s, many governments had already committed to creating portal sites for their citizens. In the United States the main portal is USA.gov in English and GobiernoUSA.gov in Spanish, in addition to portals developed for specific audiences such as DisabilityInfo.gov; in the United Kingdom the main portals are Directgov (for citizens) and businesslink.gov.uk (for businesses).

Many U.S. states have their own portals which provide direct access to e-commerce applications, agency and department web sites, and more specific information about living in, doing business in and getting around the state. Some U.S. states have chosen to out-source the operation of their portals to third-party vendors.

The National Portal of India provides comprehensive information about India and its various facets.

One of the issues that come up with government web portals is that different agencies often have their own portals and sometimes a statewide portal-directory structure is not sophisticated and deep enough to meet the needs of multiple agencies.

"Corporate web portals"
Corporate intranets became common during the 1990s. Having access to company information via a web browser ushered in a new way of working. As intranets grew in size and complexity, webmasters were faced with increasing content and user management challenges.

A consolidated view of company information was judged insufficient; users wanted personalization and customization. Webmasters, if skilled enough, were able to offer some capabilities, but for the most part ended up driving users away from using the intranet.

Many companies began to offer tools to help webmasters manage their data, applications and information more easily, and through personalized views. Some portal solutions today are able to integrate legacy applications, other portals objects, and handle thousands of user requests.

Today’s corporate portals offer extended capabilities for businesses: workflow management, collaboration between work groups, and policy-managed content publication.
In addition, most portal solutions today can allow internal and external access to specific corporate information using secure authentication or Single sign-on.

The JSR 168 standard emerged around 2001. Java Specification Request (JSR) 168 allows the interoperability of portlets across different portal platforms. It allows portal developers, administrators, and consumers to integrate standards-based portals and portlets across a variety of vendor solutions.

The concept of content aggregation seems to still be gaining momentum, and portal solutions will likely continue to evolve significantly over the next few years. The Gartner Group predicts that generation 8 portals will expand on the enterprise mashup concept of delivering a variety of information, tools, applications, and access points through a single mechanism.

With the increase in user-generated content, disparate data silos, and file formats, information architects and taxonomists will be required to give users the ability to tag (classify) the data. This will ultimately cause a ripple effect, as users will also be generating ad hoc navigation and information flows.

Some useful lessons can be learned from Web 2.0 applications such as Netvibes, PageFlakes, and Protopage; a new breed of competitors, such as PersonAll, uses this angle to enter the market.

"Hosted web portals"
As corporate portals gained popularity a number of companies began offering them as a hosted service. The hosted portal market fundamentally changed the composition of portals. In many ways they served simply as a tool for publishing information instead of the loftier goals of integrating legacy applications or presenting correlated data from distributed databases.

The early hosted portal companies such as Hyperoffice.com or the now defunct InternetPortal.com focused on collaboration and scheduling in addition to the distribution of corporate data. As hosted web portals have risen in popularity, their feature set has grown to include hosted databases, document management, email, discussion forums and more.

Hosted portals automatically personalize the content generated from their modules to provide a personalized experience to their users. In this regard they have remained true to the original goals of the earlier corporate web portals.

"Domain-specific portals"
A number of portals have come about that are specific to a particular domain, offering access to related companies and services. A prime example of this trend is the growth in property portals, which give access to services such as estate agents, removal firms, and solicitors that offer conveyancing.

Along the same lines, industry-specific news and information portals have appeared.

"Sports portals"
Web portals have also expanded into the professional sports market. Fans of sports teams create a "sportal" (sports portal), which brings all the information about a professional sports team together in one web portal.


Source: en.wikipedia.org


Website

A web site is a collection of Web pages, images, videos or other digital assets that is hosted on one or more web servers, usually accessible via the Internet.

A Web page is a document, typically written in (X)HTML, that is almost always accessible via HTTP, a protocol that transfers information from the Web server to display in the user's Web browser.

All publicly accessible websites are seen collectively as constituting the "World Wide Web".
The pages of a website can usually be accessed from a common root URL called the homepage, and usually reside on the same physical server.

The URLs of the pages organize them into a hierarchy, although the hyperlinks between them control how the reader perceives the overall structure and how the traffic flows between the different parts of the site.
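As a sketch of this idea, the short Python snippet below derives each page's depth in the site hierarchy from its URL path. The example.com URLs are hypothetical, invented for illustration.

```python
from urllib.parse import urlparse

# Hypothetical URLs from one site; the path segments imply a hierarchy.
urls = [
    "http://www.example.com/",
    "http://www.example.com/about.html",
    "http://www.example.com/products/",
    "http://www.example.com/products/widgets.html",
]

def depth(url):
    """Count path segments below the site's root URL."""
    segments = [s for s in urlparse(url).path.split("/") if s]
    return len(segments)

for url in urls:
    # indent each page according to its place in the URL hierarchy
    print("  " * depth(url) + (urlparse(url).path or "/"))
```

Note that this recovers only the hierarchy implied by the URLs; as the paragraph above says, the hyperlinks between pages, not the URLs, determine how readers actually move through the site.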

Some websites require a subscription to access some or all of their content. Examples of subscription sites include many business sites, parts of many news sites, academic journal sites, gaming sites, message boards, Web-based e-mail services, social networking websites, and sites providing real-time stock market data. Because they require authentication to view the content, such sites technically function as intranet sites.


HISTORY
The World Wide Web was created in 1990 by CERN engineer Tim Berners-Lee. On 30 April 1993, CERN announced that the World Wide Web would be free to anyone.

Before the introduction of HTML and HTTP, other protocols such as the File Transfer Protocol (FTP) and the Gopher protocol were used to retrieve individual files from a server.

These protocols offered a simple directory structure which the user navigated to choose files to download. Documents were most often presented as plain text files without formatting, or were encoded in word processor formats.


OVERVIEW
Organized by function, a website may be:

  • a personal website
  • a commercial website
  • a government website
  • a non-profit organization website
It could be the work of an individual, a business or other organization, and is typically dedicated to some particular topic or purpose. Any website can contain a hyperlink to any other website, so the distinction between individual sites, as perceived by the user, may sometimes be blurred.

Websites are written in, or dynamically converted to, HTML (Hypertext Markup Language) and are accessed using a software interface classified as a user agent. Web pages can be viewed or otherwise accessed from a range of computer-based and Internet-enabled devices of various sizes, including desktop computers, laptops, PDAs and cell phones.

A website is hosted on a computer system known as a web server, also called an HTTP server, and these terms can also refer to the software that runs on these systems and that retrieves and delivers the Web pages in response to requests from the website users.
Apache is the most commonly used Web server software (according to Netcraft statistics) and Microsoft's Internet Information Server (IIS) is also commonly used.
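As a minimal sketch of what web-server software does (a toy in Python's standard library, not how Apache or IIS are actually implemented): accept an HTTP request and deliver a stored page in response.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# The page to deliver; real servers read files or run application code instead.
PAGE = b"<html><body><h1>Hello from a tiny web server</h1></body></html>"

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # every GET request gets the same stored page back
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(PAGE)))
        self.end_headers()
        self.wfile.write(PAGE)

def serve(port=8000):
    """Run the server; visiting http://localhost:8000/ shows the page."""
    HTTPServer(("localhost", port), Handler).serve_forever()
```

Calling `serve()` blocks and answers requests until interrupted; the port number 8000 is an arbitrary choice for the example.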


WEBSITE STYLE

"Static Website"

A Static Website is one that has web pages stored on the server in the same form as the user will view them. It is primarily coded in HTML (Hypertext Markup Language).

A static website is also called a Classic website, a 5-page website or a Brochure website because it simply presents pre-defined information to the user. It may include information about a company and its products and services via text, photos, Flash animation, audio/video and interactive menus and navigation.

This type of website usually displays the same information to all visitors, thus the information is static. Similar to handing out a printed brochure to customers or clients, a static website will generally provide consistent, standard information for an extended period of time.

Although the website owner may make updates periodically, it is a manual process to edit the text, photos and other content and may require basic website design skills and software.

In summary, visitors are not able to control what information they receive via a static website, and must instead settle for whatever content the website owner has decided to offer at that time.

They are edited using four broad categories of software:
  • Text editors, such as Notepad or TextEdit, where the HTML is manipulated directly within the editor program
  • WYSIWYG offline editors, such as Microsoft FrontPage and Adobe Dreamweaver (previously Macromedia Dreamweaver), where the site is edited using a graphical interface and the underlying HTML is generated automatically by the editor software
  • WYSIWYG online editors, where media-rich online presentations such as websites, widgets, intros and blogs are created on a Flash-based platform.
  • Template-based editors, such as Rapidweaver and iWeb, which allow users to quickly create and upload websites to a web server without having to know anything about HTML, as they just pick a suitable template from a palette and add pictures and text to it in a DTP-like fashion without ever having to see any HTML code.

"Dynamic website"
A Dynamic Website is one that does not have web pages stored on the server in the same form as the user will view them.
Instead, the web page content changes automatically and/or frequently based on certain criteria. It generally collates information on the fly each time a page is requested.

A website can be dynamic in one of two ways. The first is that the web page code is constructed dynamically, piece by piece. The second is that the web page content displayed varies based on certain criteria. The criteria may be pre-defined rules or may be based on variable user input.

The main purpose behind a dynamic site is that it is much simpler to maintain a few web pages plus a database than it is to build and update hundreds or thousands of individual web pages and links.

In one way, a data-driven website is similar to a static site because the information that is presented on the site is still limited to what the website owner has allowed to be stored in the database (data entered by the owner and/or input by users and approved by the owner).

The advantage is that there is usually a lot more information stored in a database and made available to users. "Dynamic" also describes how a site is constructed, and more specifically refers to the code used to create a single web page.

A Dynamic Web Page is generated on the fly by piecing together certain blocks of code, procedures or routines. A dynamically-generated web page would call various bits of information from a database and put them together in a pre-defined format to present the reader with a coherent page.
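As an illustration of this assembly process, here is a minimal Python sketch: content blocks live in a database, and each request pieces them into HTML through a pre-defined template. The SQLite schema, table name and template are invented for the example.

```python
import sqlite3

# An in-memory database standing in for the site's real content store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE articles (title TEXT, body TEXT)")
conn.execute("INSERT INTO articles VALUES ('Welcome', 'Our first post.')")

TEMPLATE = "<html><body>{blocks}</body></html>"  # the pre-defined format

def render_page():
    """Build the page on the fly from whatever the database holds right now."""
    rows = conn.execute("SELECT title, body FROM articles").fetchall()
    blocks = "".join("<h2>%s</h2><p>%s</p>" % (t, b) for t, b in rows)
    return TEMPLATE.format(blocks=blocks)

# Adding a row changes every future page without editing any HTML file.
conn.execute("INSERT INTO articles VALUES ('News', 'A second post.')")
```

This is why maintaining a database plus a few templates scales better than hand-editing hundreds of static pages: the content and the presentation are changed independently.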

It interacts with users in a variety of ways, including by reading cookies to recognize a user's previous history, session variables, server-side variables and so on, or by using direct interaction (form elements, mouseovers, etc.).

A site can display the current state of a dialogue between users, monitor a changing situation, or provide information in some way personalized to the requirements of the individual user.
Some countries, for example the U.K., have introduced legislation regarding web accessibility.

"Software systems"
A wide range of software systems is available to generate dynamic Web systems and dynamic sites, such as JavaServer Pages (JSP), the PHP and Perl programming languages, Active Server Pages (ASP) and ColdFusion (CFM). Sites may also include content that is retrieved from one or more databases or by using XML-based technologies such as RSS.

Static content may also be generated dynamically, either periodically or when certain conditions for regeneration occur, and then cached, in order to avoid the performance loss of invoking the dynamic engine on a per-user or per-connection basis.

Plugins are available to expand the features and abilities of Web browsers, which use them to show active content, such as Flash, Shockwave or applets written in Java.

Dynamic HTML also provides for user interactivity and realtime element updating within Web pages (i.e., pages don't have to be loaded or reloaded to effect changes), mainly using the DOM and JavaScript, support for which is built into most modern Web browsers.

Turning a website into an income source is a common practice for web-developers and website owners.

There are several methods for creating a website business which fall into two broad categories, as defined below:

  1. Content-based sites
Some websites derive revenue by selling advertising space on the site (see contextual ads).
  2. Product or service based sites
Some websites derive revenue by offering products or services. In the case of e-commerce websites, the products or services may be purchased at the website itself, by entering credit card or other payment information into a payment form on the site.

While most business websites serve as a shop window for existing brick-and-mortar businesses, it is increasingly the case that some websites are businesses in their own right; that is, the products they offer are only available for purchase on the web.

Websites occasionally derive income from a combination of these two practices. For example, a website such as an online auctions website may charge the users of its auction service to list an auction, but also display third-party advertisements on the site, from which it derives further income.


Source: en.wikipedia.org


Friday, 16 January 2009

File Extensions

Do you know what a "file extension" is? If you don't, then read this blog post all the way to the end . . .
A file extension is bla bla bla bla (honestly, it's a hassle to explain), but let me explain "what a file extension is" to you in the simplest way I can.

A file extension can also be thought of as the "format/type of the files" on your computer.

Have you ever seen, or maybe heard of, .mp3, .jpg, .rar or .doc files? Those are the extensions of some files; you could call them the "surnames" of the "citizens" (the files) of your computer.


For example, all files from the WinRAR program have file names ending in ".rar", and all Ms. Word files have file names ending in ".doc".
That is what is called a file's extension! Still don't get it? Read it again . . .
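In code terms, the extension is just the suffix after the last dot in the file name, and it tells the system (and you) the file's format. A small Python sketch, where the type table is only a sample:

```python
import os

KNOWN_TYPES = {            # a few common extensions; this table is illustrative
    ".mp3": "audio",
    ".jpg": "image",
    ".rar": "WinRAR archive",
    ".doc": "Ms. Word document",
}

def describe(filename):
    """Look up a file's type from the extension after the last dot."""
    _, ext = os.path.splitext(filename)
    return KNOWN_TYPES.get(ext.lower(), "unknown type")

print(describe("Laporan.doc"))   # Ms. Word document
print(describe("lagu.MP3"))      # audio
```

Windows does essentially the same lookup when it decides which program opens a file you double-click.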

On every computer (one that hasn't been tinkered with yet), those file extensions are not shown; they are hidden (that's how it comes out of the box, i.e. the default setting).

But there's no need to worry or fret, let alone get stressed! Relax . . . take a deep breath (don't forget to let it out too) and follow these steps:


1. Open "My Computer"

2. Open "Folder Options" in the "Tools" menu

3. When the "Folder Options" menu opens, go to the "View" tab and uncheck "Hide extensions for known file types"

4. Click "OK" and look at your files again; their extensions should now be visible.
If they still aren't visible, repeat the steps; if they still won't show up, your computer may have been infected by a virus or something like it. (Read "xxxxxxxxxxxxxxxxxxxx")

