Friday, March 30, 2012

CDMA, HSDPA, UMTS, GSM, WCDMA, HSUPA

What are HSDPA & HSUPA?




HSDPA vs HSUPA
HSPA (High Speed Packet Access), commonly referred to as 3.5G, is an upgrade to WCDMA networks that allows for much higher data speeds for internet connectivity. There are two aspects to this technology and each is more or less independent of the other. HSDPA (High Speed Downlink Packet Access) is the one that improves the downlink of the data transmission while HSUPA (High Speed Uplink Packet Access) is the one that improves the uplink or transmission from the mobile device to the network.
Common practices that would be affected by having HSDPA include watching online videos, browsing sites, downloading files, and a lot more. If you usually send emails with large attachments, upload files to sites, or seed files in a file sharing network, then HSUPA would improve the speed at which you do your tasks.
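
To get a feel for what these speeds mean in practice, here is a rough back-of-the-envelope sketch in Python. The peak rates used (about 0.384 Mbps for plain WCDMA, 14.4 Mbps for HSDPA downlink, 5.76 Mbps for HSUPA uplink) are assumed, commonly quoted figures; real-world throughput is much lower and varies with signal quality, so treat the numbers as ballpark only.

# Rough illustration: estimated transfer times at assumed peak link rates.
def transfer_time_seconds(size_megabytes, link_rate_mbps):
    """Time to move a file of the given size at the given link rate."""
    size_megabits = size_megabytes * 8          # bytes -> bits
    return size_megabits / link_rate_mbps       # megabits / (megabits per second)

FILE_MB = 10  # a 10 MB attachment

print("Download on plain WCDMA:", round(transfer_time_seconds(FILE_MB, 0.384)), "seconds")
print("Download with HSDPA:    ", round(transfer_time_seconds(FILE_MB, 14.4)), "seconds")
print("Upload with HSUPA:      ", round(transfer_time_seconds(FILE_MB, 5.76)), "seconds")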

CDMA /WCDMA
CDMA (code-division multiple access) refers to any of several protocols used in so-called second-generation (2G) and third-generation (3G) wireless communications. As the term implies, CDMA is a form of multiplexing, which allows numerous signals to occupy a single transmission channel, optimizing the use of available bandwidth. The technology is used in ultra-high-frequency (UHF) cellular telephone systems in the 800-MHz and 1.9-GHz bands.
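
The "code-division" part can be illustrated with a toy sketch (my own example, not part of any standard): two users share the same channel by spreading their bits with orthogonal codes, and each receiver recovers its own bit by correlating the combined signal with its own code.

# Toy illustration of code-division multiplexing with orthogonal codes.
import numpy as np

# Two orthogonal Walsh codes of length 4 (their dot product is 0).
code_a = np.array([+1, +1, +1, +1])
code_b = np.array([+1, -1, +1, -1])

# Each user sends one data bit, encoded as +1 or -1.
bit_a, bit_b = +1, -1

# Spreading: multiply each bit by the user's chip sequence; the shared
# channel simply adds the two signals together.
channel = bit_a * code_a + bit_b * code_b

# Despreading: correlate the combined signal with each user's own code
# and normalise by the code length to recover the original bits.
recovered_a = int(np.sign(channel @ code_a / len(code_a)))
recovered_b = int(np.sign(channel @ code_b / len(code_b)))

print(recovered_a, recovered_b)   # -> 1 -1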

GSM
Radio spectrum is very limited, which is why only about 10-25 MHz is dedicated to wireless communication. Such a narrow bandwidth allows only 100-400 channels of reasonable quality, and it is neither practical nor commercially profitable to build a network for such a small number of mobile subscribers. The breakthrough idea was to divide the whole geographical area into relatively small cells; each cell can reuse the same frequencies by reducing the transmission power. Each cell has its own antenna (base station), and all base stations are interconnected using microwave or cable links.
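
A quick back-of-the-envelope calculation shows why cells multiply capacity even though the total spectrum stays the same. The channel count, cluster size and number of cells below are assumed figures, chosen only for illustration.

# Rough sketch: capacity with and without frequency reuse.
total_channels = 300      # channels available in the whole allocated band
cluster_size = 7          # classic reuse pattern: 7 cells per cluster
number_of_cells = 140     # cells covering the city

channels_per_cell = total_channels // cluster_size
# Cells outside a cluster reuse the same frequencies at low power,
# so city-wide capacity scales with the number of cells.
total_capacity = channels_per_cell * number_of_cells

print(channels_per_cell)  # 42 simultaneous calls per cell
print(total_capacity)     # 5880 simultaneous calls city-wide vs. 300 without cells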



History 

Once upon a time there was analog cellular communication, which did not support encryption, compression, or ISDN compatibility; in addition, each country (and company) developed its own system, incompatible with everyone else's in both equipment and operation.
So, in the early 80s, Europeans realized that a pan-European public mobile system should be developed. The new system had to meet certain criteria:
  • Good subjective speech quality
  • Low terminal and service cost
  • International roaming
  • ISDN compatibility
  • Digital

GSM Network Architecture

UMTS

UMTS is one of the Third Generation (3G) mobile systems being developed within the ITU's IMT-2000 framework. It is a realisation of a new generation of broadband multi-media mobile telecommunications technology. The coverage area of service provision is intended to be worldwide, in the form of FPLMTS (Future Public Land Mobile Telecommunications System, now called IMT-2000). Coverage will be provided by a combination of cell sizes ranging from 'in building' pico cells to global cells provided by satellite, giving service to the remote regions of the world. UMTS is not a replacement for 2nd generation technologies (e.g. GSM, DCS1800, CDMA, DECT etc.), which will continue to evolve to their full potential.



STEP2SOLUTIONS








Wednesday, March 21, 2012

Computer Fundamentals


COMPUTER: -----------
A computer is an electronic device which is used to compute. It is basically used to process the input data given by a user. The character-by-character meaning of COMPUTER is given below:

C:-  Common
O:- Operating
M:- Machine
P:-  Particularly
U:- Uses for
T:- Training
E:- Entertainment/Education and
R:-Research

Basic functions of a computer:--
A computer is used in a lot of areas and performs a lot of different work, but the basic working of a computer is the same everywhere.
         It always receives input (raw data) from the user and, after processing it according to the user's instructions, provides output (useful information). The whole processing cycle is illustrated below.
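
Here is a tiny Python sketch of that input-process-output cycle (the marks/grade example is invented, purely for illustration):

raw_data = input("Enter marks out of 100: ")        # INPUT: raw data from the user

percentage = float(raw_data)                        # PROCESS: apply the user's
grade = "Pass" if percentage >= 40 else "Fail"      # instructions to the data

print(f"Result: {grade} ({percentage}%)")           # OUTPUT: useful information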


PARTS OF A COMPUTER

A) Hardware
B) Software

A)    Hardware:-
All the devices of a computer that can be physically accessed by a user, or that we can touch, are called hardware. It can be divided into the following units:
1)    Input devices:-
                  keyboard, mouse, joystick etc.
2)    Output devices:-
                           Monitor  and Printer.
3)    Processing devices:-
                 Processor, Motherboard
4)    Storage devices:-
                 Hard disk, RAM.


SOFTWARE:-
Software is a collection of interrelated programs for a specific task. It provides a platform to interact with the hardware of the computer system. Software cannot be touched.

SYSTEM SOFTWARE:-
It is a set of programs used to interact between the user and the computer hardware. System software is the basic software that is necessary to use the computer. Without system software, we cannot use the hardware. We can divide system software into four parts.
1).Operating system (OS)
2). Language translator
3). Device driver
4). Utility programs

1).Operating system (OS):-
An operating system is system software that works as an intermediary between the user and the hardware of the computer system. It sits on top of the hardware and provides an interface, or virtual machine, which is more convenient to use than the bare hardware machine.
 FUNCTIONS OF AN OS:--
A). Memory management
B). Process management
C). File management
D). Security
E). Command Interpretation

A). Memory Management:-
The memory management module of an operating system takes care of data transfer between the primary and secondary memory of the computer system.
          It is responsible for managing both memories, in the sense that primary memory is used only for the work currently in progress, while secondary memory is used for saved work.

B). PROCESS MANAGEMENT:-
A process is a program in execution, so the process management module of an operating system takes care of the execution and termination of the different processes in the computer system.
                   All programs are installed in secondary storage, but when we launch a program, the OS loads it into RAM as a process.
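
As a small illustration of this idea (using Python's standard subprocess module; the child program here is just a one-line print), asking the OS to turn a program on disk into a running process looks like this:

# A program file in secondary storage becomes a process only when the OS
# loads it into memory; subprocess.run() asks the OS to do exactly that.
import subprocess
import sys

result = subprocess.run(
    [sys.executable, "-c", "print('hello from a brand-new process')"],
    capture_output=True,
    text=True,
)

print(result.stdout.strip())   # output produced by the child process
print(result.returncode)       # 0 means the process terminated normally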

C). File management:-
The file management module of an OS takes care of all file-related activities such as creating, editing, deleting, sorting, and merging files.
                  Files that we have not yet saved exist only in RAM; after we give the command to save a file, it is written to secondary storage.
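
A small sketch of the kind of file requests a program makes through the OS (the file name notes.txt is just an example):

import os

path = "notes.txt"                       # hypothetical file name

with open(path, "w") as f:               # create a file and write to it
    f.write("first line\n")

with open(path, "a") as f:               # edit: append more data
    f.write("second line\n")

with open(path) as f:                    # read the saved contents back
    print(f.read())

os.remove(path)                          # delete the file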

D) Security: -
Every operating system has a built-in feature for protecting against unauthorized access. For security, it provides a username and password used to log in to the computer, so a user without a valid username and password cannot access the system.
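
Here is a toy sketch of such a username/password check in Python. It only illustrates the idea; real operating systems also add a per-user salt and use a deliberately slow hash, and the account name and password below are invented.

import hashlib

def hash_password(password):
    # The system stores a hash of the password, never the raw password.
    return hashlib.sha256(password.encode()).hexdigest()

# "Account database" created when the user was registered.
accounts = {"admin": hash_password("s3cret")}

def login(username, password):
    stored = accounts.get(username)
    return stored is not None and stored == hash_password(password)

print(login("admin", "s3cret"))    # True  -> access granted
print(login("admin", "wrong"))     # False -> access denied
print(login("guest", "anything"))  # False -> unknown user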

E) Command Interpretation:-
Each operating system provides its own interface for the user, whether a friendly one or a more difficult one. There are two main kinds of interface that almost every OS provides:
1) GUI--- Graphical User Interface
2) CLI ----Command Line Interface

2).Language translators:-
Language translators are system software used to translate instructions given in the user's language into machine language, and to translate output given in machine language back into a form the user can read.


There are three types of language translators-
(1)           Assembler
(2)           Compiler
(3)           Interpreter

Languages used by a computer:--

A computer uses different types of languages; some of them are given below:
A)              Machine language
B)              Assembly language
C)              High – level language

A)       MACHINE LANGUAGE:-
Machine language is a language that can be understood only by the machine.
                  A machine can only understand '1' and '0' (binary), which is very difficult for humans to understand.

B)         ASSEMBLY LANGUAGE:-
Assembly language was created to remove the complications of binary language. It was the first human-readable computer language. Some characteristics are given below:
1)               Basic language of the computer
2)               Machine-dependent language (a program can run only on the type of processor for which it was written)

C)         HIGH LEVEL LANGUAGE:
BASIC, PASCAL, C, C++, JAVA, FORTRAN, and HTML are high-level languages used nowadays to instruct the PC. Some basic characteristics are given below (a small illustration follows the list):

(a) Machine-independent language.
(b) Very easy to learn.
(c) Very easy to manage.
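
As a small illustration of these levels, Python's own dis module can show the lower-level instructions hiding behind a single high-level line. Bytecode is not real machine language, but the translation step is the same idea.

import dis

def area(radius):
    return 3.14159 * radius * radius      # high-level: one readable line

dis.dis(area)   # prints the lower-level instructions behind that line
                # (LOAD_CONST, LOAD_FAST, BINARY_MULTIPLY / BINARY_OP,
                #  RETURN_VALUE, depending on the Python version)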

3)         Device drivers----
Device drivers are system software written for a specific piece of hardware; they tell the operating system how that particular hardware works.
                    Every hardware device needs a device driver program installed in the operating system.

4)         Utility programs:------
Utility programs are system software that assist the user with system maintenance tasks such as scanning the system for viruses, data recovery, backing up data, disk formatting, etc.
Example: - Antivirus, backup s/w, recovery tools, disk formatting s/w etc.

APPLICATION S/W: -----
Application software is software used to carry out specific user tasks such as image editing, designing, webpage designing, listening to music, etc.
It is supported by system software such as the operating system.
Example: - Microsoft Office, Photoshop, Corel, MP3 player, video player, Flash, Dreamweaver, Sound Forge, 3ds Max etc.


Icon:- The representation of a program through a small image is called an icon.
Desktop:- It is the screen that appears after the OS is fully loaded into RAM. It contains the 'Start menu', a way to open all programs.
Task Bar: - The bar (usually a blue strip) at the bottom of the desktop; it shows all the programs currently running on the system.
My Computer: - An icon on the desktop used to open all the files, directories, and devices in the computer.
My Documents: - The default place for saving documents and files.
Recycle Bin:- The place where deleted files go. When we delete a file from the computer it is automatically shifted into the Recycle Bin; it is fully deleted from the system only when we delete it from the Recycle Bin.

PROCESSOR:- It is the brain of the computer. It is responsible for all processing of the given data. It controls all other hardware and software through its control unit, processes data according to the given instructions, and sends the result to the output devices.

STORAGE UNIT:- The devices that are used to store information (data) are called storage devices.

Primary storage:- Temporary storage; data is kept here only for a short time.
RAM:- (Random Access Memory) It is a part of primary storage. The currently running processes live in RAM. If we do not save our work, it is lost when the system is turned off.

Two types of RAM:-
(1) Static RAM
(2) Dynamic RAM

ROM:- It is the part of primary storage that is used only to read the data or information stored in it; we cannot change or overwrite the saved data.
Three types of ROM:-
(1) PROM - Programmable Read-Only Memory.
(2) EPROM - Erasable Programmable Read-Only Memory.
(3) EEPROM - Electrically Erasable Programmable Read-Only Memory.


CACHE MEMORY:-
Cache memory is the part of primary storage built from static RAM inside (or very close to) the microprocessor. It is also called buffer memory. The processor keeps the data and instructions it is currently working on in cache memory, and its speed is very high.
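
As a software analogy of the same principle (this is memoization, not real CPU cache, but the idea of keeping frequently used results in a small fast store is identical):

import time
from functools import lru_cache

@lru_cache(maxsize=128)            # the small, fast "cache"
def slow_lookup(key):
    time.sleep(0.5)                # stands in for a slow trip to main memory/disk
    return key * 2

start = time.perf_counter()
slow_lookup(21)                    # first access: slow, result gets cached
first = time.perf_counter() - start

start = time.perf_counter()
slow_lookup(21)                    # repeated access: answered from the cache
second = time.perf_counter() - start

print(f"first access : {first:.3f} s")    # ~0.5 s
print(f"cached access: {second:.6f} s")   # far faster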

Secondary Storage:- It is the permanent storage of a PC. We can call it non-volatile memory, which means that any file we save in this storage is not lost after the system is turned off.

PERIPHERALS:-
(1) Motherboard
(2) CD-ROM/CD drive
(3) Floppy drive
(4) SMPS (switch mode power supply)

Motherboard: - The main board; all input, output, processing, and storage devices are connected to the motherboard. It provides a separate interface for every type of device.

CLASSIFICATION OF COMPUTER

(1) PERSONAL COMPUTER:- A small, single-user computer based on a microprocessor.
(2) Workstation:- A powerful, single-user computer. It is like a PC but has a more powerful microprocessor and a higher-quality monitor.
(3) Minicomputer: - A multi-user computer capable of supporting from 10 to hundreds of users simultaneously.
(4) Mainframe: - A powerful multi-user computer capable of supporting many hundreds or thousands of users simultaneously. Popular mainframe makers include Sperry, DEC, IBM, HP, HCL, etc.
(5) SUPERCOMPUTER: - An extremely fast computer that can perform hundreds of millions of instructions per second. These are amongst the fastest machines in terms of processing speed and use multiprocessing techniques, where a number of processors are used to solve a problem. Supercomputers are mainly used for weather forecasting, computational fluid dynamics, remote sensing, image processing, bio-medical applications, etc.
KEYBOARD: - The keyboard is a typewriter-like device which contains keys used to feed information into the computer. In general, keyboards are available in two models: the standard keyboard with 83-84 keys, and the enhanced keyboard with 104 keys or more. The keys of a standard computer keyboard, along with their functions, are described below:
*Typewriter keys:- These are the normal keys of the keyboard. They include letters, numbers, and punctuation symbols.
*Function keys:- These keys, labeled F1 to F12, are located at the top of the keyboard. The functions they perform depend on the software you use.
*Cursor control keys:- These keys, marked with arrows, are called the Left, Right, Up, and Down arrow keys, respectively.

Printer:- A device that prints images (numbers, alphabets, graphs, etc.) on paper is known as a printer. After creating a document on the computer, you can send it to the printer to print its hard copy, generally called a printout. The speed of a printer is rated either in pages per minute (ppm) or in characters per second (cps). You can take a printout in full color or in black only.
CLASSIFICATION OF PRINTERS
*Dot matrix printer:- This type of printer makes use of pins. The pins strike an ink ribbon, and as each pin hits the ribbon a dot appears on the paper; combinations of dots form characters and illustrations. A dot matrix printer can print 1 to 8 pages in one minute. Use of dot-matrix printers is now limited to printing invoices and bills.
*Inkjet printer:- This type of printer sprays ink onto a sheet of paper. Ink-jet printers produce high-quality text and graphics. An inkjet can print 4 to 6 pages in one minute. Due to its low price and affordability, the inkjet printer is popular in homes, especially among kids.
*Laser printer: - This type of printer uses a fine powder ink called toner. Laser printers use the same technology as photocopy machines. They produce high-quality text and graphics prints. The laser printer is quite popular in the corporate world and in printing houses.

MONITOR
The monitor of a PC works like a television screen. It displays text characters and graphics in color or in shades of grey.
CHARACTERISTICS OF A MONITOR
*Dot pitch:- The dot pitch is a measure of how close together the pixels, or dots, that create the image on the monitor are. The finer the dot pitch, the better the image quality. The dot pitch depends entirely on the monitor itself.
*Resolution and refresh rate: - Resolution and refresh rate are two combined factors that depend on each other. They work hand in hand to produce a clean image, and both depend on the bandwidth available from your video card.








History of Computer Generations


The history of computer development is often described in terms of the different generations of computing devices. A generation refers to the state of improvement in the product development process, and the term is also used for the different advancements of new computer technology. With each new generation, the circuitry has become smaller and more advanced than in the generation before it. As a result of this miniaturization, the speed, power, and memory of computers have increased proportionally. New discoveries are constantly being developed that affect the way we live, work and play.
Each generation of computers is characterized by a major technological development that fundamentally changed the way computers operate, resulting in increasingly smaller, cheaper, more powerful and more efficient and reliable devices. Read about each generation and the developments that led to the devices we use today.

First Generation - 1940-1956: Vacuum Tubes

The first computers used vacuum tubes for circuitry and magnetic drums for memory, and were often enormous, taking up entire rooms. A magnetic drum, also referred to simply as a drum, is a metal cylinder coated with magnetic iron-oxide material on which data and programs can be stored. Magnetic drums were once used as a primary storage device but have since been relegated to auxiliary storage.
The tracks on a magnetic drum are assigned to channels located around the circumference of the drum, forming adjacent circular bands that wind around the drum. A single drum can have up to 200 tracks. As the drum rotates at a speed of up to 3,000 rpm, the device's read/write heads deposit magnetized spots on the drum during the write operation and sense these spots during a read operation. This action is similar to that of a magnetic tape or disk drive.
They were very expensive to operate and, in addition to using a great deal of electricity, generated a lot of heat, which was often the cause of malfunctions. First generation computers relied on machine language to perform operations, and they could only solve one problem at a time. Machine languages are the only languages understood by computers. While easily understood by computers, machine languages are almost impossible for humans to use because they consist entirely of numbers. Programmers, therefore, use either high-level programming languages or assembly language. An assembly language contains the same instructions as a machine language, but the instructions and variables have names instead of being just numbers.
Programs written in high-level programming languages are translated into assembly language or machine language by a compiler. An assembly language program is translated into machine language by a program called an assembler (assembly language compiler).
Every CPU has its own unique machine language. Programs must therefore be rewritten or recompiled to run on different types of computers. Input was based on punched cards and paper tape, and output was displayed on printouts.
The UNIVAC and ENIAC computers are examples of first-generation computing devices. The UNIVAC was the first commercial computer delivered to a business client, the U.S. Census Bureau in 1951.
ENIAC is an acronym for Electronic Numerical Integrator And Computer, the world's first operational electronic digital computer, developed by Army Ordnance to compute World War II ballistic firing tables. The ENIAC, weighing 30 tons, using 200 kilowatts of electric power and consisting of 18,000 vacuum tubes, 1,500 relays, and hundreds of thousands of resistors, capacitors, and inductors, was completed in 1945. In addition to ballistics, the ENIAC's field of application included weather prediction, atomic-energy calculations, cosmic-ray studies, thermal ignition, random-number studies, wind-tunnel design, and other scientific uses. The ENIAC soon became obsolete as the need arose for faster computing speeds.

Second Generation - 1956-1963: Transistors

Transistors replaced vacuum tubes and ushered in the second generation of computers. A transistor is a device composed of semiconductor material that amplifies a signal or opens or closes a circuit. Invented in 1947 at Bell Labs, transistors have become the key ingredient of all digital circuits, including computers. Today's latest microprocessors contain tens of millions of microscopic transistors.
Prior to the invention of transistors, digital circuits were composed of vacuum tubes, which had many disadvantages. They were much larger, required more energy, dissipated more heat, and were more prone to failures. It's safe to say that without the invention of transistors, computing as we know it today would not be possible.
The transistor was invented in 1947 but did not see widespread use in computers until the late 50s. The transistor was far superior to the vacuum tube, allowing computers to become smaller, faster, cheaper, more energy-efficient and more reliable than their first-generation predecessors. Though the transistor still generated a great deal of heat that subjected the computer to damage, it was a vast improvement over the vacuum tube. Second-generation computers still relied on punched cards for input and printouts for output.
Second-generation computers moved from cryptic binary machine language to symbolic, or assembly, languages, which allowed programmers to specify instructions in words. High-level programming languages were also being developed at this time, such as early versions of COBOL and FORTRAN. These were also the first computers that stored their instructions in their memory, which moved from a magnetic drum to magnetic core technology.
The first computers of this generation were developed for the atomic energy industry.

Third Generation - 1964-1971: Integrated Circuits

The development of the integrated circuit was the hallmark of the third generation of computers. Transistors were miniaturized and placed on silicon chips, called semiconductors, which drastically increased the speed and efficiency of computers.
A nonmetallic chemical element in the carbon family of elements. Silicon - atomic symbol "Si" - is the second most abundant element in the earth's crust, surpassed only by oxygen. Silicon does not occur uncombined in nature. Sand and almost all rocks contain silicon combined with oxygen, forming silica. When silicon combines with other elements, such as iron, aluminum or potassium, a silicate is formed. Compounds of silicon also occur in the atmosphere, natural waters, many plants and in the bodies of some animals.
Silicon is the basic material used to make computer chips, transistors, silicon diodes and other electronic circuits and switching devices because its atomic structure makes the element an ideal semiconductor. Silicon is commonly doped, or mixed, with other elements, such as boron, phosphorous and arsenic, to alter its conductive properties.
A chip is a small piece of semiconducting material (usually silicon) on which an integrated circuit is embedded. A typical chip is less than ¼ square inch and can contain millions of electronic components (transistors). Computers consist of many chips placed on electronic boards called printed circuit boards. There are different types of chips. For example, CPU chips (also called microprocessors) contain an entire processing unit, whereas memory chips contain blank memory.
A semiconductor is a material that is neither a good conductor of electricity (like copper) nor a good insulator (like rubber). The most common semiconductor materials are silicon and germanium. These materials are then doped to create an excess or lack of electrons.
Computer chips, both for CPU and memory, are composed of semiconductor materials. Semiconductors make it possible to miniaturize electronic components, such as transistors. Not only does miniaturization mean that the components take up less space, it also means that they are faster and require less energy.
Instead of punched cards and printouts, users interacted with third generation computers through keyboards and monitors and interfaced with an operating system, which allowed the device to run many different applications at one time with a central program that monitored the memory. Computers for the first time became accessible to a mass audience because they were smaller and cheaper than their predecessors.

Fourth Generation - 1971-Present: Microprocessors

The microprocessor brought in the fourth generation of computers, as thousands of integrated circuits were built onto a single silicon chip: a chip that contains an entire CPU. In the world of personal computers, the terms microprocessor and CPU are used interchangeably. At the heart of all personal computers and most workstations sits a microprocessor. Microprocessors also control the logic of almost all digital devices, from clock radios to fuel-injection systems for automobiles.
Three basic characteristics differentiate microprocessors:
  • Instruction Set: The set of instructions that the microprocessor can execute.
  • Bandwidth: The number of bits processed in a single instruction.
  • Clock Speed: Given in megahertz (MHz), the clock speed determines how many instructions per second the processor can execute.
For bandwidth and clock speed, the higher the value, the more powerful the CPU. For example, a 32-bit microprocessor that runs at 50 MHz is more powerful than a 16-bit microprocessor that runs at 25 MHz.
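
A very rough sketch of that comparison, treating "power" simply as word size times clock speed. Real performance also depends on the instruction set, pipeline and memory, so this is only a back-of-the-envelope illustration.

def raw_throughput_bits_per_second(word_size_bits, clock_mhz):
    # Assume (unrealistically) one word processed per clock cycle.
    return word_size_bits * clock_mhz * 1_000_000

old_cpu = raw_throughput_bits_per_second(16, 25)   # 16-bit CPU at 25 MHz
new_cpu = raw_throughput_bits_per_second(32, 50)   # 32-bit CPU at 50 MHz

print(new_cpu / old_cpu)   # 4.0 -> four times the raw throughput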
What in the first generation filled an entire room could now fit in the palm of the hand. The Intel 4004 chip, developed in 1971, located all the components of the computer - from the central processing unit and memory to input/output controls - on a single chip.
CPU is an abbreviation of central processing unit, pronounced as separate letters. The CPU is the brains of the computer. Sometimes referred to simply as the processor or central processor, the CPU is where most calculations take place. In terms of computing power, the CPU is the most important element of a computer system.
On large machines, CPUs require one or more printed circuit boards. On personal computers and small workstations, the CPU is housed in a single chip called a microprocessor.
Two typical components of a CPU are:
  • The arithmetic logic unit (ALU), which performs arithmetic and logical operations.
  • The control unit, which extracts instructions from memory and decodes and executes them, calling on the ALU when necessary.
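
As a toy sketch of how these two units cooperate (an invented mini instruction set, not any real CPU): the control unit fetches and decodes each instruction, and calls on the ALU for arithmetic and logic.

def alu(op, a, b):
    """Arithmetic logic unit: performs arithmetic and logical operations."""
    return {"ADD": a + b, "SUB": a - b, "AND": a & b, "OR": a | b}[op]

def control_unit(program):
    """Fetches instructions from 'memory', decodes them, and executes them."""
    registers = {"R0": 0, "R1": 0}
    for instruction in program:                    # fetch
        opcode, dest, value = instruction          # decode
        if opcode == "LOAD":                       # execute: move data
            registers[dest] = value
        else:                                      # execute: ask the ALU
            registers[dest] = alu(opcode, registers[dest], value)
    return registers

# Tiny program held in memory: R0 = 7; R1 = 5; R0 = R0 + 3
program = [("LOAD", "R0", 7), ("LOAD", "R1", 5), ("ADD", "R0", 3)]
print(control_unit(program))   # {'R0': 10, 'R1': 5}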
In 1981 IBM introduced its first computer for the home user, and in 1984 Apple introduced the Macintosh. Microprocessors also moved out of the realm of desktop computers and into many areas of life as more and more everyday products began to use microprocessors.
As these small computers became more powerful, they could be linked together to form networks, which eventually led to the development of the Internet. Fourth generation computers also saw the development of GUIs, the mouse and handheld devices.

Fifth Generation - Present and Beyond: Artificial Intelligence

Fifth generation computing devices, based on artificial intelligence, are still in development, though there are some applications, such as voice recognition, that are being used today.
Artificial Intelligence is the branch of computer science concerned with making computers behave like humans. The term was coined in 1956 by John McCarthy at the Massachusetts Institute of Technology. Artificial intelligence includes:
  • Games Playing: programming computers to play games such as chess and checkers
  • Expert Systems: programming computers to make decisions in real-life situations (for example, some expert systems help doctors diagnose diseases based on symptoms)
  • Natural Language: programming computers to understand natural human languages
  • Neural Networks: Systems that simulate intelligence by attempting to reproduce the types of physical connections that occur in animal brains
  • Robotics: programming computers to see and hear and react to other sensory stimuli
Currently, no computers exhibit full artificial intelligence (that is, they are not able to simulate human behavior). The greatest advances have occurred in the field of game playing. The best computer chess programs are now capable of beating humans. In May 1997, an IBM supercomputer called Deep Blue defeated world chess champion Garry Kasparov in a chess match.
In the area of robotics, computers are now widely used in assembly plants, but they are capable only of very limited tasks. Robots have great difficulty identifying objects based on appearance or feel, and they still move and handle objects clumsily.
Natural-language processing offers the greatest potential rewards because it would allow people to interact with computers without needing any specialized knowledge. You could simply walk up to a computer and talk to it. Unfortunately, programming computers to understand natural languages has proved to be more difficult than originally thought. Some rudimentary translation systems that translate from one human language to another are in existence, but they are not nearly as good as human translators.
There are also voice recognition systems that can convert spoken sounds into written words, but they do not understand what they are writing; they simply take dictation. Even these systems are quite limited -- you must speak slowly and distinctly.
In the early 1980s, expert systems were believed to represent the future of artificial intelligence and of computers in general. To date, however, they have not lived up to expectations. Many expert systems help human experts in such fields as medicine and engineering, but they are very expensive to produce and are helpful only in special situations.
Today, the hottest area of artificial intelligence is neural networks, which are proving successful in a number of disciplines such as voice recognition and natural-language processing.
There are several programming languages that are known as AI languages because they are used almost exclusively for AI applications. The two most common are LISP and Prolog.

Voice Recognition

The field of computer science that deals with designing computer systems that can recognize spoken words. Note that voice recognition implies only that the computer can take dictation, not that it understands what is being said. Comprehending human languages falls under a different field of computer science called natural language processing. A number of voice recognition systems are available on the market. The most powerful can recognize thousands of words. However, they generally require an extended training session during which the computer system becomes accustomed to a particular voice and accent. Such systems are said to be speaker dependent.
Many systems also require that the speaker speak slowly and distinctly and separate each word with a short pause. These systems are called discrete speech systems. Recently, great strides have been made in continuous speech systems -- voice recognition systems that allow you to speak naturally. There are now several continuous-speech systems available for personal computers.
Because of their limitations and high cost, voice recognition systems have traditionally been used only in a few specialized situations. For example, such systems are useful in instances when the user is unable to use a keyboard to enter data because his or her hands are occupied or disabled. Instead of typing commands, the user can simply speak into a headset. Increasingly, however, as the cost decreases and performance improves, speech recognition systems are entering the mainstream and are being used as an alternative to keyboards.
The use of parallel processing and superconductors is helping to make artificial intelligence a reality. Parallel processing is the simultaneous use of more than one CPU to execute a program. Ideally, parallel processing makes a program run faster because there are more engines (CPUs) running it. In practice, it is often difficult to divide a program in such a way that separate CPUs can execute different portions without interfering with each other.
Most computers have just one CPU, but some models have several. There are even computers with thousands of CPUs. With single-CPU computers, it is possible to perform parallel processing by connecting the computers in a network. However, this type of parallel processing requires very sophisticated software called distributed processing software.
Note that parallel processing differs from multitasking, in which a single CPU executes several programs at once.
Parallel processing is also called parallel computing.
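
A short Python sketch of the difference: the same work split across several worker processes, each of which can run on its own CPU core, in contrast with multitasking, where one CPU switches between programs. The prime-counting job and chunk sizes are arbitrary, chosen only so there is something CPU-heavy to divide.

from multiprocessing import Pool

def count_primes(limit):
    """Deliberately CPU-heavy work: count primes below `limit`."""
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    chunks = [30_000, 30_000, 30_000, 30_000]   # four independent pieces of work

    # Serial: one CPU does all four chunks one after another.
    serial = [count_primes(c) for c in chunks]

    # Parallel: four worker processes run the chunks at the same time.
    with Pool(processes=4) as pool:
        parallel = pool.map(count_primes, chunks)

    print(serial == parallel)   # True - same answers, potentially in less time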
Quantum computation and molecular and nano-technology will radically change the face of computers in years to come. First proposed in the 1970s, quantum computing relies on quantum physics by taking advantage of certain quantum physics properties of atoms or nuclei that allow them to work together as quantum bits, or qubits, to be the computer's processor and memory. By interacting with each other while being isolated from the external environment, qubits can perform certain calculations exponentially faster than conventional computers.
Qubits do not rely on the traditional binary nature of computing. While traditional computers encode information into bits using binary numbers, either a 0 or a 1, and can only do calculations on one set of numbers at a time, quantum computers encode information as a series of quantum-mechanical states, such as spin directions of electrons or polarization orientations of a photon. Such a state might represent a 1 or a 0, a combination of the two, or a value expressing that the state of the qubit is somewhere between 1 and 0, a superposition of many different numbers at once. A quantum computer can do an arbitrary reversible classical computation on all these numbers simultaneously, which a binary system cannot do, and it also has some ability to produce interference between the various numbers. By doing a computation on many different numbers at once, then interfering the results to get a single answer, a quantum computer has the potential to be much more powerful than a classical computer of the same size. In using only a single processing unit, a quantum computer can naturally perform myriad operations in parallel.
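
As a tiny illustration of superposition (a state-vector simulation with NumPy, not a real quantum computer): one qubit sent through a Hadamard gate ends up measuring 0 or 1 with equal probability.

import numpy as np

ket0 = np.array([1.0, 0.0])                    # the qubit starts in state |0>

hadamard = np.array([[1, 1],
                     [1, -1]]) / np.sqrt(2)    # gate that creates superposition

state = hadamard @ ket0                        # state is now (|0> + |1>) / sqrt(2)

probabilities = np.abs(state) ** 2             # Born rule: |amplitude|^2
print(probabilities)                           # [0.5 0.5]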
Quantum computing is not well suited for tasks such as word processing and email, but it is ideal for tasks such as cryptography and modeling and indexing very large databases.
Nanotechnology is a field of science whose goal is to control individual atoms and molecules to create computer chips and other devices that are thousands of times smaller than current technologies permit. Current manufacturing processes use lithography to imprint circuits on semiconductor materials. While lithography has improved dramatically over the last two decades -- to the point where some manufacturing plants can produce circuits smaller than one micron (1,000 nanometers) -- it still deals with aggregates of millions of atoms. It is widely believed that lithography is quickly approaching its physical limits. To continue reducing the size of semiconductors, new technologies that juggle individual atoms will be necessary. This is the realm of nanotechnology.
Although research in this field dates back to Richard P. Feynman's classic talk in 1959, the term nanotechnology was first coined by K. Eric Drexler in 1986 in the book Engines of Creation.
In the popular press, the term nanotechnology is sometimes used to refer to any sub-micron process, including lithography. Because of this, many scientists are beginning to use the term molecular nanotechnology when talking about true nanotechnology at the molecular level.
The goal of fifth-generation computing is to develop devices that respond to natural language input and are capable of learning and self-organization.
Here natural language means a human language. For example, English, French, and Chinese are natural languages. Computer languages, such as FORTRAN and C, are not.
Probably the single most challenging problem in computer science is to develop computers that can understand natural languages. So far, the complete solution to this problem has proved elusive, although a great deal of progress has been made. Fourth-generation languages are the programming languages closest to natural languages.

Sixth Generation of Computers

"Generation of computers" is a term used to describe the evolution of computing and how technology has shaped the computing industry into a more streamlined, powerful set of highly evolved processors. Each technological breakthrough has made computers smaller, faster and less expensive. In the 1980s it was rare for even one home to have a computer; now most homes have several. Accessing the Internet was once reserved for college universities; now we can download music from the web onto our cell phones. And we are still changing, making things even smaller and able to perform more complex functions, such as typing what we say and beating us at chess.
Artificial intelligence is the realm of programming where devices are given the ability to think and react to the environment around them. The fields of gaming, robotics, voice recognition, and real-life simulation all center on perfecting the science of artificial intelligence. The sixth generation of computers differs from previous generations in terms of size, processing speed and the complexity of the tasks that computers can now perform. Back in the earliest stages of computing, computers contained vacuum tubes and magnetic drums. They were large, expensive and could only perform one task at a time. They were also prone to malfunctions and had the self-destructive inclination to overheat due to the vast amount of electricity they used and the heat they generated.
The ability to perform many complex tasks at one time was expanded and revolutionized with the introduction of the microprocessor in the early 70s. Now what took up a whole room could rest gently on a fingertip. Microprocessors were the beginning of a flurry of technological advancements that includes computerized cars, appliances and smart phones. Everything has become smarter, faster and smaller. They have also become integrated. With the advent of the microprocessor came the ability to link computers together in a network. The birth of the Internet and all of its wonders is attributed to the birth of the microchip.
All of this has led to the ability to program computers to imitate thinking. While no computer or device can truly think on its own (sorry, HAL), it can simulate many decision-making functions that have helped to improve the lives, and the fun, of humans.
AI programmers started with basic algorithms for reasoning and progressed to using probability and economic theories to help solve more complex issues. Advances in the field of fuzzy logic have introduced the ability to tackle problems where there is no clear right or wrong answer. While more complex problem solving is possible, the amount of computing power needed expands exponentially when the solution is difficult. Research is ongoing to find ways to make problem solving more efficient and less cost-prohibitive.
In the military, AI has helped enhance simulators, so that soldiers can be trained for the unexpected problems that arise in different peacekeeping missions around the world. Artificial intelligence is also prevalent in many of the world's spy networks, helping to determine the probability of certain actions occurring in highly volatile parts of the world.
In the automobile industry, robots are used in manufacturing, but AI has also crept into the vehicles themselves. From global positioning systems to warning a driver of a potential hazard ahead, artificial intelligence is present. Some cars can even apply the brake or swerve the vehicle out of harm's way if necessary.
Artificial intelligence has also assisted in the development of voice recognition software (VAR). While unable to understand the words, VAR has allowed people who have disabilities to speak into a microphone and see the words appear on the screen. Although people need to speak slowly and clearly for it to work properly, and it is not 100% accurate, VAR is now available on many handheld devices. Being able to surf the web or send a text has become easier than ever, and more advancements are on the way.
This can be seen in the robot that won on Jeopardy! and in the customer-service avatars now seen on websites and as office assistants. One virtual office assistant (HAL's ancestor, perhaps) can hold an actual conversation with the user, as well as remind him to pick up a gallon of milk on the way home.

step2solution@gmail.com
Complete Hardware & Networking solutions