Thursday 17 May 2012

Digital Community Channel Transmitter

CT-101

The CT-101 Digital Community Channel Transmitter is a new-concept device for transmitting community channels over CATV broadcast networks. An OFDM modulator, a ReMux device and a scheduler are integrated in one compact device, enabling a wide variety of community channel programming.
In addition, programs can be transmitted in the form of MPEG2-TS data, saving considerable time and cost.
• OFDM modulation for ISDB-T digital terrestrial broadcasting.
• Built-in ReMux device for carrying multiple HD and SD community channels, with bit rates set as desired.
• Seven (7) external inputs for video and EPG servers and programs from satellites.
• Integrated video selector and scheduler hardware/software.
• Edited programs can be stored in the video server and transmitted in the form of MPEG2-TS data.
• EPG (Electronic Program Guide) can be created when scheduling TV programs.
• Function to limit dubbing and copying of TV programs.
• Monitoring software to watch operations is available.

CT-101 Block Diagram

Block Diagram
*1 AAC-101 is a unit that converts MPEG1 audio (the audio format used by video editing software) into AAC, the format used for digital terrestrial broadcasting.
*2 CAC-010 is a unit that converts parallel signals from satellites into DVB-ASI (serial) signals. Dimensions: 126 x 96 x 30 mm

General Specifications

Input port: Number of ports: 7; Interface: DVB-ASI
Output port: Number of ports: 1; Interface: OFDM (5.6 MHz)
Modulation system: QPSK / 16QAM / 64QAM
AC line: AC 100 V ±10% (50/60 Hz) / 100 W or less
Ambient temperature: 10 to 40 °C
Dimensions: CT-101 main body 480 (W) × 99 (H) × 400 (D) mm
AAC-101 480 (W) × 49 (H) × 370 (D) mm
*Specifications are subject to change without notice.
GUI screens *To be redesigned in English.

■Scheduler

Scheduler

■Configuration

Configuration

■Monitoring

Monitoring

Crusoe Processor

Mobile computing has been a buzzword for quite a long time, and mobile computing devices like laptops, webslates and notebook PCs are common nowadays. The heart of every PC, whether a desktop or a mobile PC, is the microprocessor. Several microprocessors are available for desktop PCs from companies like Intel, AMD and Cyrix, but the mobile computing market has never had a microprocessor specifically designed for it. The microprocessors used in mobile PCs are optimized versions of desktop PC microprocessors. Mobile computing makes very different demands on processors than desktop computing, yet until now, mobile x86 platforms have simply made do with the same processors originally designed for desktops. Those processors consume a lot of power, and they get very hot. When you're on the go, a power-hungry processor means you have to pay a price: run out of power before you've finished, run more slowly and lose application performance, or run through the airport with pounds of extra batteries. A hot processor also needs fans to cool it, making the resulting mobile computer bigger, clunkier and noisier. A newly designed microprocessor with low power consumption will still be rejected by the market if its performance is poor, so any attempt in this regard must strike a proper performance-power balance to ensure commercial success. It must also be fully x86 compatible, that is, it should run x86 applications just like conventional x86 microprocessors, since most currently available software has been designed for the x86 platform.
Crusoe is a new microprocessor designed specifically for the mobile computing market, with the above-mentioned constraints in mind. It was developed by a small Silicon Valley startup company called Transmeta Corp. after five years of secret toil and an expenditure of $100 million. The concept of Crusoe is best understood from a simple sketch of the processor architecture that Transmeta called the 'amoeba'. In this sketch, the x86 architecture is an ill-defined amoeba containing features like segmentation, ASCII arithmetic and variable-length instructions. The amoeba illustrated how a traditional microprocessor was, in their design, to be divided between hardware and software.
Thus Crusoe was conceptualized as a hybrid microprocessor: it has a software part and a hardware part, with the software layer surrounding the hardware unit. The role of the software is to act as an emulator, translating x86 binaries into native code at run time. Crusoe is a 128-bit microprocessor fabricated in a CMOS process. The chip's design is based on a technique called VLIW (very long instruction word) to ensure design simplicity and high performance. Besides this, it also uses two of Transmeta's patented technologies: Code Morphing software and LongRun power management. It is a highly integrated processor available in different versions for different market segments.
Technology Perspective
The Transmeta designers have decoupled the x86 instruction set architecture (ISA) from the underlying processor hardware, which allows this hardware to be very different from a conventional x86 implementation. For the same reason, the underlying hardware can be changed radically without affecting legacy x86 software: each new CPU design only requires a new version of the Code Morphing software to translate x86 instructions into the new CPU's native instruction set. For the initial Transmeta products, models TM3120 and TM5400, the hardware designers opted for minimal space and power. By eliminating roughly three quarters of the logic transistors that would be required for an all-hardware design of similar performance, the designers likewise reduced power requirements and die size. However, future hardware designs can emphasize different factors and accordingly use different implementation techniques. Finally, the Code Morphing software, which resides in standard flash ROMs, itself offers opportunities to improve performance without altering the underlying hardware.
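The translate-once-and-cache idea at the heart of Code Morphing can be sketched in a few lines. This is an illustration in Python, not Transmeta's actual implementation; the class and the translator function are invented names, and real x86-to-VLIW translation is far more involved.

```python
# Illustrative sketch of the Code Morphing idea: translate a block of x86
# instructions to native code once, cache the result keyed by its address,
# and reuse the cached translation on every later execution of that block.

class CodeMorphingSketch:
    def __init__(self, translate_fn):
        self.translate_fn = translate_fn   # hypothetical x86 -> native translator
        self.cache = {}                    # translation cache keyed by block address

    def execute(self, block_addr, x86_block):
        if block_addr not in self.cache:
            # First execution: pay the translation cost once.
            self.cache[block_addr] = self.translate_fn(x86_block)
        native_block = self.cache[block_addr]
        return native_block()              # run the cached "native" code

# Toy usage: "translate" an x86 block into a Python closure.
morpher = CodeMorphingSketch(lambda block: (lambda: f"native:{block}"))
print(morpher.execute(0x1000, "mov eax, 1"))   # translated on first call
print(morpher.execute(0x1000, "mov eax, 1"))   # served from the cache
```

The cache is why the run-time translation cost is tolerable: frequently executed code is translated once and then runs at native speed.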

Ultra Wide Band Technology

Ultra Wide Band (UWB) is a revolutionary technology with incomparable potential in terms of throughput, performance and low-cost implementation. The uniqueness of UWB is that it transmits across an extremely wide bandwidth of several GHz, around a low center frequency, at very low power levels.
UWB is fundamentally different from existing radio frequency technology. For radios today, picture a guy watering his lawn with a garden hose and moving the hose up and down in a smooth vertical motion. You can see a continuous stream of water in an undulating wave. Nearly all radios, cell phones, wireless LANs and so on are like that: a continuous signal that’s overlaid with information by using one of several modulation techniques. Now picture the same guy watering his lawn with a swiveling sprinkler that shoots many, fast, short pulses of water. That’s typically what UWB is like: millions of very short, very fast, precisely timed bursts or pulses of energy, measured in nanoseconds and covering a very wide area. By varying the pulse timing according to a complex code, a pulse can represent either a zero or a one: the basis of digital communications.
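The pulse-timing idea above can be sketched as a toy pulse-position modulator: each bit nudges a very short pulse slightly later or leaves it at its nominal position inside its time slot. The frame and shift lengths below are illustrative values, not parameters of any real UWB system.

```python
# Sketch of pulse-position modulation: bit 0 -> pulse at the nominal slot
# position, bit 1 -> pulse delayed by `shift` samples within its slot.

def ppm_encode(bits, frame=100, shift=10):
    """Return the sample index of each pulse, one pulse per bit."""
    return [i * frame + (shift if b else 0) for i, b in enumerate(bits)]

def ppm_decode(pulse_positions, frame=100, shift=10):
    """Recover bits from the pulse timing within each frame."""
    return [1 if (p % frame) >= shift / 2 else 0 for p in pulse_positions]

bits = [1, 0, 1, 1, 0]
positions = ppm_encode(bits)
assert ppm_decode(positions) == bits
print(positions)   # pulse times in samples: [10, 100, 210, 310, 400]
```

Real UWB systems also dither the timing with a long pseudo-random code, which spreads the energy smoothly across the band and lets multiple users share it.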
UWB is almost two decades old, but is used mainly in limited radar or position-location devices. Only recently has UWB been applied to business communications. It’s a different type of transmission that will lead to low-power, high-bandwidth and relatively simple radios for local- and personal-area network interface cards and access points. At higher power levels in the future, UWB systems could span several miles or more.
Wireless technologies such as 802.11b and short-range Bluetooth radios could eventually be replaced by UWB products with a throughput capacity 1,000 times greater than 802.11b (11 Mbit/s). Those numbers mean UWB systems have the potential to support many more users, at much higher speeds and lower costs, than current wireless LAN systems. Current UWB devices can transmit data at up to 100 Mbps, compared to the 1 Mbps of Bluetooth and the 11 Mbps of 802.11b. Best of all, it costs a fraction of current technologies such as Bluetooth and Wi-Fi wireless LANs.
ULTRA WIDE BAND
This concept doesn't stand for a definite standard of wireless communication. It is a method of modulation and data transmission that could entirely change the wireless picture in the near future. The diagram below demonstrates the basic principle of UWB:
The UWB signal is shown above and the traditional modulation below, called Narrow Band (NB) here as opposed to Ultra Wideband. On the left is the signal on the time axis; on the right is its frequency spectrum, i.e. the energy distribution across the frequency band. Most modern data transmission standards are NB standards: all of them work within a quite narrow frequency band, allowing only small deviations from the base (carrier) frequency. Below on the right is the spectral energy distribution of a typical 802.11b transmitter. It has a very narrow (80 MHz for one channel) dedicated spectral band around the reference frequency of 2.4 GHz. Within this narrow band the transmitter emits the considerable amount of energy necessary for reliable reception within the designed range (100 m for 802.11b). The band is strictly defined by the FCC and other regulatory bodies and requires licensing. Data are encoded and transmitted using frequency modulation (controlled deviation from the base frequency) within the described channel.

Cell-phone working principle

In this lesson we take a brief look at a typical block diagram of a cellphone.
A block diagram helps us understand the signal flow through a cellphone's circuit.
A cellphone handset is basically composed of two sections:
the RF section and the baseband section.
RF
RF refers to radio frequency, the mode of communication for wireless technologies of all kinds, including cordless phones, radar, ham radio, GPS, and radio and television broadcasts. RF technology is so much a part of our lives that we scarcely notice it for its ubiquity. From baby monitors to cell phones, Bluetooth® to remote control toys, RF waves are all around us. RF waves are electromagnetic waves that propagate at the speed of light, or 186,000 miles per second (300,000 km/s). The frequencies of RF waves, however, are lower than those of visible light, making RF waves invisible to the human eye.
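As a quick worked example of the relationship above: because RF waves travel at the speed of light, frequency and wavelength are tied together by λ = c / f.

```python
# Wavelength of an RF wave from its frequency: lambda = c / f.
C = 299_792_458            # speed of light in m/s (~300,000 km/s)

def wavelength_m(freq_hz):
    return C / freq_hz

print(round(wavelength_m(2.4e9), 3))   # 2.4 GHz (Bluetooth/Wi-Fi band): ~0.125 m
print(round(wavelength_m(900e6), 3))   # 900 MHz (a common cellular band): ~0.333 m
```

Shorter wavelengths at higher frequencies are why cellphone antennas can be so small.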


Baseband
In signal processing, baseband describes signals and systems whose range of frequencies is measured from zero to a maximum bandwidth or highest signal frequency. It is sometimes used as a noun for a band of frequencies starting at zero.
In telecommunications, it is the frequency range occupied by a message signal prior to modulation.
It can be considered a synonym for low-pass.
Baseband is also sometimes used as a general term for part of the physical components of a wireless communications product. Typically, it includes the control circuitry (microprocessor), the power supply, and amplifiers.
A baseband processor is an IC that is mainly used in a mobile phone to process communication functions.

The baseband is itself composed of two sections: the analog and digital processing sections. We will treat them separately to make things easier to understand.
I have kept this simple and easy instead of explaining it with deep technical terms, so that the basic concepts and methods of how a cellphone works are easy to follow.

Since the baseband is divided into analog and digital functions while the RF section remains a single circuit section, a cellphone has three sections, which are the following:
1. Radio Frequency (RF) Section
2. The Analog Baseband Processor
3. The Digital Baseband Processor




Radio Frequency Processing Section
The RF section of the cellphone circuit is also known as the RF transceiver.
It is the section that transmits and receives signals on given frequencies to the network and stays synchronized with other phones.

The RF (radio) section is based on two main circuits:
1. Transmitter
2. Receiver

A simple mobile phone uses these two circuits to communicate with another mobile phone. A transmitter is a circuit or device used to transmit radio signals through the air, and a receiver, much like a radio, receives the transmissions spread through the air by any transmitter on a specific frequency.
Two-way communication is made possible by synchronizing two transmitters and two receivers so that the transmitter in one cellphone is tuned to the receiving frequency of the other cellphone, just as the transmitter of the second cellphone is tuned to the receiving frequency of the first. The first cellphone transmits its signal into the air while the other phone listens to it, and the same process runs in the opposite direction, so the two handsets communicate with one another.
The technology used these days is a little different, but it is based on the basic theory described above; today's technology will be discussed later on.


Analog Baseband Processor
A/D and D/A section
The analog baseband processing section is composed of several types of circuits.
This section converts and processes analog-to-digital (A/D) and digital-to-analog (D/A) signals.
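As a minimal illustration of what the A/D and D/A steps do, the sketch below quantizes a test tone to 8-bit integer codes and converts it back. The bit depth and sample count are illustrative, not taken from any handset.

```python
import numpy as np

# A/D: sample an "analog" waveform and quantize each sample to an n-bit
# integer code. D/A: map the codes back to the analog amplitude range.

def adc_sketch(signal, bits=8):
    """Map samples in [-1, 1] to integer codes 0 .. 2**bits - 1."""
    levels = 2 ** bits
    codes = np.round((signal + 1) / 2 * (levels - 1)).astype(int)
    return np.clip(codes, 0, levels - 1)

def dac_sketch(codes, bits=8):
    """Inverse mapping: integer codes back to the [-1, 1] range."""
    levels = 2 ** bits
    return codes / (levels - 1) * 2 - 1

t = np.linspace(0, 1, 50, endpoint=False)
audio = np.sin(2 * np.pi * 3 * t)          # a stand-in "analog" tone
codes = adc_sketch(audio)
restored = dac_sketch(codes)
print(np.max(np.abs(audio - restored)))    # error is at most half a step (~0.004)
```

The round trip is not perfect: each sample picks up quantization error of up to half a step, which is why more bits mean better audio fidelity.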
Control section
This section acts as the controller of the input and output of any analog and digital signal.
Power Management
The power management section in a mobile phone is designed to handle the energy consumed by the phone. There are two main subsections in a single power section.
• Power distribution and switching section
• Charging section
The power distribution section distributes the desired voltages and currents to the other sections of the phone. It takes power from the battery (commonly rated at 3.6 V) and in some places steps it down to various voltages such as 2.8 V, 1.8 V or 1.6 V, while in other places it
steps the voltage up, for example to 4.8 V. This section is commonly designed around a power IC (an integrated circuit) that distributes and regulates the voltages used by the other components.
The charging section is based on a charging IC that takes power from an external source and feeds it to the battery to recharge it when it is exhausted. This section commonly takes 6.4 V from an external battery charger and regulates it to 5.8 V while feeding the battery. The battery is charged by this process and is ready for the next session (a battery session is the standby or talk time specified by the manufacturer of the phone).
Audio Codecs Section
This is the section where analog and digital audio signals are processed: the microphone, earpiece speaker, headset and ring-tones, and also the vibrator circuits.

Digital Baseband Processor
This is the part where all applications are processed. The digital baseband processor section of a mobile phone handles data input and output signals: switching, driving application commands, and memory accessing and execution.
These are the parts and sections installed in a digital baseband circuit.
CPU
CPU (Central Processing Unit): the CPU is responsible for interpreting and executing most of the commands from the user interface. It is often called the "brains" of the phone.
Flash and Memory Storage Circuits
*RAM (Random Access Memory)
*ROM, Flash (Read Only Memory)
Interfaces such as the following are also part of this section:
*Bluetooth
*Wi-Fi
*Camera
*Screen display
*Keypads
*USB
*SIM card



Here is a typical overview of a block diagram of the latest mobile phone designs.

Various mobile phones differ in concept and design in every aspect, but the methods and operational flow are essentially the same. They differ in which IC chips and parts are used and how they are installed in a particular mobile phone's circuitry.

Wideband – OFDM

Orthogonal frequency division multiplexing (OFDM) is a multicarrier transmission technique that has been successfully applied to a wide variety of digital communication applications. Although the concept of OFDM has been around for a long time, it has recently been recognized as an excellent method for high-speed bi-directional wireless data communication. This technology is used in broadcast systems such as Asymmetric Digital Subscriber Line (ADSL), digital radio (DAB: Digital Audio Broadcasting, an ETSI standard) and digital TV (DVB-T: Digital Video Broadcasting - Terrestrial), as well as being proposed for wireless LAN standards.
OFDM efficiently squeezes multiple modulated carriers tightly together, reducing the required bandwidth while keeping the modulated signals orthogonal so that they do not interfere with each other. Any digital modulation technique can be used on the separate carriers. The outputs of the modulated carriers are added together before transmission. At the receiver, the modulated carriers are separated before demodulation.
W-OFDM will allow the deployment of 4G wireless networks that enable phones to transmit data at rates of up to megabits per second. OFDM segments according to frequency: it is a technique that divides the spectrum into a number of equally spaced tones and carries a portion of a user's information on each tone. A tone can be thought of as a frequency. Each tone is orthogonal to the others. OFDM is also called multi-tone modulation.
OFDM can be considered a multiple access technique, because an individual tone or group of tones can be assigned to different users. Multiple users share a given bandwidth in this manner, yielding the system called OFDMA. Each user can be assigned a predetermined number of tones when they have information to send, or alternatively a user can be assigned a variable number of tones based on the amount of information they have to send. W-OFDM can overcome the problems of high peak-to-average signal amplitude and fading due to multipath effects. W-OFDM enables the implementation of low-power multipath RF networks that minimize interference with adjacent networks.
OFDM FOR MOBILE COMMUNICATION
OFDM represents a different system design approach: it can be thought of as a combination of modulation and multiple access schemes that segment a communications channel in such a way that many users share it. Whereas TDMA segments according to time and CDMA segments according to spreading codes, OFDM segments according to frequency. It is a technique that divides the spectrum into a number of equally spaced tones and carries a portion of a user's information on each tone. A tone can be thought of as a frequency, much in the same way that each key on a piano represents a unique frequency. OFDM has the special property that each tone is orthogonal to the others. Conventional systems need frequency guard bands between carriers so that they do not interfere with each other; OFDM instead allows the spectrum of each tone to overlap, and because the tones are orthogonal they do not interfere with each other. This reduces the required spectrum.
OFDM is a modulation technique that enables user data to be modulated onto the tones. The information is modulated onto a tone by adjusting the tone's phase, amplitude or both. In the most basic form, a tone may be present or absent to indicate a one or zero bit of information; however, either phase shift keying (PSK) or quadrature amplitude modulation (QAM) is typically employed. An OFDM system takes a data stream and splits it into N parallel data streams, each at a rate 1/N of the original rate. Each stream is then mapped to a tone at a unique frequency and combined with the others using the inverse fast Fourier transform (IFFT) to yield the time-domain waveform to be transmitted.
For example, if a 100-tone system were used, a single data stream with a rate of 1 megabit per second (Mbps) would be converted into 100 streams of 10 kilobits per second (kbps). By creating parallel data streams, the bandwidth of each modulation symbol is effectively decreased by a factor of 100. OFDM can also be considered a multiple access technique because an individual tone or group of tones can be assigned to different users. Multiple users share a given bandwidth in this manner, yielding the system called OFDMA. Each user can be assigned a predetermined number of tones when they have information to send, or alternatively, a user can be assigned a variable number of tones based on the amount of information they have to send.
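The chain described above (bits mapped to tones, IFFT to build the waveform, FFT to undo it at the receiver) can be sketched in a few lines of NumPy. The tone count and the QPSK mapping below are illustrative choices, not the parameters of any particular standard.

```python
import numpy as np

# Sketch of an OFDM modulator/demodulator over an ideal channel:
# bits -> QPSK symbols on N tones -> IFFT -> waveform -> FFT -> bits.

N = 8                                   # number of tones (subcarriers)
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 2 * N)        # 2 bits per QPSK symbol per tone

# QPSK mapping: each bit pair becomes one complex symbol on one tone
# (bit 0 -> +1/sqrt(2), bit 1 -> -1/sqrt(2) on each axis).
symbols = ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)

tx_waveform = np.fft.ifft(symbols)      # modulator: tones -> time domain
rx_symbols = np.fft.fft(tx_waveform)    # demodulator: time domain -> tones

# Recover the bits from the signs of the real/imaginary parts.
rx_bits = np.empty(2 * N, dtype=int)
rx_bits[0::2] = (rx_symbols.real < 0).astype(int)
rx_bits[1::2] = (rx_symbols.imag < 0).astype(int)
assert np.array_equal(bits, rx_bits)    # perfect recovery over an ideal channel
```

A real system adds a cyclic prefix, channel equalization and error-correction coding on top of this core transform pair.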
OFDM can be combined with frequency hopping to create a spread spectrum system, realizing the benefits of frequency diversity and the interference-averaging property. In a frequency hopping spread spectrum system, each user's set of tones is changed after each time period. By switching frequencies after each symbol time, the losses due to frequency-selective fading are minimized. OFDM therefore provides the best of the benefits of TDMA, in that users are orthogonal to one another, and of CDMA, while avoiding the limitations of each, including the need for frequency planning in TDMA and multiple access interference in CDMA.

Standard Definition Television

Standard-definition television (SDTV) refers to television systems with a resolution that meets broadcast standards but is not considered high definition. The term usually refers to digital television, especially when broadcasting at the same (or similar) resolution as analog systems. In ATSC, SDTV can be broadcast at 704 pixels × 480 lines with a 16:9 aspect ratio (40:33 rectangular pixels), 704 pixels × 480 lines with a 4:3 aspect ratio (10:11 rectangular pixels), or 640 pixels × 480 lines with a 4:3 ratio (and square pixels). The refresh rate can be 24, 30 or 60 pictures per second. Digital SDTV in 4:3 aspect ratio gives the same picture shape as regular analog TV (NTSC, PAL, SECAM) without the ghosting, snowy images and static noise, although with poor reception one may encounter other artifacts such as blockiness and stuttering. Though ATSC and ISDB were originally developed for HDTV, they later proved able to deliver multiple SD video and audio streams via multiplexing rather than using the entire bit stream for one HD channel. Eventually ATSC, DVB and ISDB became the standards used to broadcast digital SDTV.
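The rectangular-pixel ratios quoted above can be checked with a little arithmetic: the pixel aspect ratio is the display aspect ratio divided by the ratio of pixel counts.

```python
from fractions import Fraction

# pixel_aspect = display_aspect / (width / height)
def pixel_aspect(width, height, display_aspect):
    return display_aspect / Fraction(width, height)

print(pixel_aspect(704, 480, Fraction(16, 9)))   # 40/33
print(pixel_aspect(704, 480, Fraction(4, 3)))    # 10/11
print(pixel_aspect(640, 480, Fraction(4, 3)))    # 1 (square pixels)
```

This is why 704 × 480 material must be stretched or squeezed on playback: its pixels are not square, and the same pixel grid serves both 16:9 and 4:3 pictures.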

Digital Audio Broadcasting

Digital audio broadcasting (DAB) is the most fundamental advancement in radio technology since the introduction of FM stereo radio. It gives listeners interference-free reception of CD-quality sound, easy-to-use radios, and the potential for wider listening choice through many additional stations and services.
DAB is a reliable multi service digital broadcasting system for reception by mobile, portable and fixed receivers with a simple, non-directional antenna. It can be operated at any frequency from 30 MHz to 3GHz for mobile reception (higher for fixed reception) and may be used on terrestrial, satellite, hybrid (satellite with complementary terrestrial) and cable broadcast networks.
The DAB system is a rugged, highly spectrum- and power-efficient sound and data broadcasting system. It uses advanced digital audio compression techniques (MPEG-1 Audio Layer II and MPEG-2 Audio Layer II) to achieve a spectrum efficiency equivalent to or higher than that of conventional FM radio. The efficiency of spectrum use is further increased by a special feature called the Single Frequency Network (SFN): a broadcast network can be extended virtually without limit by operating all transmitters on the same radio frequency.
EVOLUTION OF DAB
  • DAB has been under development since 1981 at the Institut für Rundfunktechnik (IRT) and since 1987 as part of a European research project (EUREKA-147).
  • In 1987 the Eureka-147 consortium was founded. Its aim was to develop and define the digital broadcast system which later became known as DAB.
  • In 1988 the first equipment was assembled for a mobile demonstration at the Geneva WARC conference.
  • By 1990, a small number of test receivers had been manufactured. They had a size of 120 dm3.
  • In 1992, frequencies in the L and S bands were allocated to DAB on a worldwide basis.
  • From mid-1993 the third-generation receivers, widely used for test purposes, had a size of about 25 dm3.
  • The fourth-generation JESSI DAB based test receivers had a size of about 3 dm3.
  • In 1995 the first consumer-type DAB receivers, developed for use in pilot projects, were presented at the IFA in Berlin.
In short: 1992-1995 was the field trial period, 1996-1997 the introduction period, and from 1998 onwards terrestrial services have been in full swing.

Wearable Computers

Ever since the development of the earliest digital computers, computers have inspired our imagination. That period produced the World War II code-breaking machines associated with Alan Turing, and the ENIAC, which can be called a dinosaur compared to present-day PCs. In the earlier days, computers were so huge that one occupied an entire building, or at least a floor. Computers of that era were very slow by today's standards. In the never-ending struggle to increase computing speed, it was found that the speed of electricity might become a limiting factor, and so there was a need to lessen the distance that electricity had to travel in order to increase computing speed. This idea still holds true in modern computing.
By the 1970s, computers were fast enough to process an average user's applications, but they continued to occupy a considerable amount of space, being built as solid blocks of metal. Input was done by means of punch cards; later came the keyboard, which revolutionized the market. In 1971 came the Intel 4004, a processor that was finally small in size. The programmability of these systems was quite limited. Computers still had to be plugged directly into AC outlets, with input and output done by punch cards. These computers were not built with users in mind; in fact, the user had to adjust himself to the computer.
This was the time when the wearable computer (wearcomp) was born. In the 1970s, wearcomp challenged other PCs with its ability to run on batteries. Wearcomps were a new vision of how computing should be done. Wearable computing showed that man and machine were no longer separate concepts, but rather a symbiosis: the wearcomp could become a true extension of one's mind and body.
In the beginning of the 1980s, personal computing emerged. IBM's PC and other cheaper clones spread worldwide like wildfire. Finally the idea of a small PC on your desktop that cost you quite little became a reality. In the late 1980s PCs introduced the concept of WIMP (Windows, Icons, Mice & Pointers) to the world, which revolutionized interface techniques. At the same time, wearables went through a transformation of their own. They were now eyeglass-based, with external eyeglass mounts. Though they remained visible to all, wearcomps were developing principles of miniaturization, extension of man's mind and body, secrecy and personal empowerment. Now the only thing needed was an environment for them to flourish. People began to realize that wearcomps could be a powerful weapon in the hands of an individual against the machinery.
The 1990s witnessed the launch of laptops. The concept was a huge success, as people could carry their PC wherever they went and use it any time they needed. Still, a problem remained: users had to find a workspace to use their laptops, since keyboards and mice (or touch-pads) remained.
During all these years of fast transformation, there remained visionaries who struggled to design computers that were extensions of one's personality, computers that would work with your body, computers that would be with you at all times, always at your disposal. In the last two decades, wearcomps have grown smaller still. Now there are completely covert systems that reside inside a pair of average-looking eyeglasses.
One of the prevalent ideas in wearable computing is the concept of mediated reality. Mediated reality refers to encapsulation of the user’s senses by incorporating the computer with the user’s perceptive mechanisms, which are used to process the outside stimuli. For example, one can mediate their vision by applying a computer-controlled camera to enhance it. The primary activity of mediated reality is direct interaction with the computer, which means that computer is “in charge” of processing and presenting the reality to the user. A subset of mediated reality is augmented reality. It differs from the former because interaction with the computer is secondary. The computer must be able to operate in the background, providing enough resources to enhance but not replace the user’s primary experience of reality. Wearable computers have many applications centered around this concept of mediated / augmented reality as well as many other exciting applications centered around the idea of immediate access to information.

WIRELESS HD

WirelessHD is an effort by a consortium led mainly by LG, Matsushita, NEC, Samsung, SiBEAM, Sony and Toshiba to define a standard for the next-generation wireless digital network interface for high-definition signal transmission in consumer electronics products; the group intends to finalize one standard by spring 2007. WirelessHD (WiHD) is designed and optimized for wireless display connectivity, achieving high data rates from 2 Gbit/s to 5 Gbit/s for the CE, PC and portable device segments in its first-generation implementation. The standard supports uncompressed digital transmission of HD video and audio signals, making it, in theory, like wireless HDMI. Data rates as high as 20 Gbit/s (compared to 10.2 Gbit/s for HDMI 1.3) are possible with its core technology, permitting it to scale to higher resolutions, color depths and ranges.
The signal will operate in the 60 GHz frequency band, which currently requires line of sight between transmitter and receiver, and will apparently provide the bandwidth required to support both current and future HD signals. A line-of-sight requirement, however, is far from the real aim of WiHD, which is to maintain the elegance of hang-on-the-wall plasmas and LCDs by tucking the components and wires away in a cabinet. The goal for the first line of products is in-room, point-to-point, non-line-of-sight (NLOS) operation at up to 10 meters. There is much work to be done to improve interoperability among devices, and also to expand the capabilities of personal video players, PDAs and other handheld devices.