Will you “work for bandwidth”? Do you feel the need for speed? Some people say adding bandwidth is addictive – the more you have, the more you want – but how much do you really need, how can you get it, and at what point (if ever) will you have too much?
Smart PC buyers don’t buy systems based solely on clock speed; they also consider many other factors, including their application needs. The same is true for networking, so this article addresses the many factors affecting bandwidth requirements, availability and cost.
Bits, Bytes, and Megahertz (MHz)
“Bandwidth” initially described the amount of frequency allocated within a specific band, such as 2.4-2.4835 GHz (or 83.5 MHz available). Today, the term describes network speed as measured in either bits per second or Bytes (characters) per second, but many factors limit actual throughput to 50% or less of the rated speed.
Most networking products describe their performance in terms of bits per second (bits/sec or bps) since it’s a larger number than Bytes per second (Bytes/sec or Bps). Don’t confuse the two, and notice that bits is usually abbreviated with a lower case “b” while Bytes is abbreviated with a capital “B”. A general rule of thumb is that one character (or Byte) is made up of 10 binary bits (8 bits plus parity and overhead). But many factors get in the way of achieving the rated performance.
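The rule of thumb above can be captured in a few lines of code. This is just an illustrative sketch using the article's 10-bits-per-Byte estimate; the helper name and example numbers are my own.

```python
# Rough converter between a link's rated speed (bits/sec) and the time
# to move a file, using the article's 10-bits-per-Byte rule of thumb
# (8 data bits plus ~2 bits of parity and overhead).

BITS_PER_BYTE = 10

def transfer_seconds(size_bytes, link_bps):
    """Estimated seconds to move size_bytes over a link rated at link_bps."""
    return size_bytes * BITS_PER_BYTE / link_bps

# Example: a 1920-character mainframe screen over a 300-baud modem
print(transfer_seconds(1920, 300))  # 64.0 seconds
```

Note how the 10-bits-per-Byte fudge factor conveniently makes the mental math easy: divide the rated bits-per-second speed by 10 to get characters per second.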
Home Networks vs. Access Networks
Readers with home networks already know that the speed of the home network is usually much faster than the access network. Ethernet, for example, is rated at 10Mbps or 100Mbps while the performance of digital cable or DSL service may be less than 1Mbps, and dialup telephone service is something less than 56Kbps.
Hierarchy of Bottlenecks
Network bandwidth is just one part of overall system performance, as shown in this hierarchy of potential bottlenecks, arranged with the fastest components at the top and the slowest at the bottom. Computer applications that rely on any of the lower-level components almost never have to wait on the faster components above them.
CPU-bound applications – A PC’s central processor and memory are so much faster (measured in GHz and nanoseconds) than the other components that only a few CPU-bound applications that need very little data input and output (from files stored on disks, or transmitted across networks, or keyed in) actually benefit from investments in faster CPU speeds. Examples of CPU- and memory-intensive applications include scientific modeling, high-end gaming, and video editing.
Disk storage – Have you ever heard your PC’s hard disk making noise, with the read/write head chattering in and out to different parts of the drive? That’s a classic symptom of “thrashing,” where the computer spends more time swapping contents between real memory (in RAM) and virtual memory (on the hard disk) than doing useful work. Disk performance is determined by a combination of average seek time (the time to move the head to a different location), average rotation delay (waiting for a specific sector to come underneath the head), and actual transfer time. This activity is slower than memory speeds, so the best solution to thrashing is to install more RAM, since this eliminates the need to swap to virtual memory, and since a faster disk drive only makes the swapping “problem” happen faster.
Network-intensive applications – When data needs to come across a network, bandwidth is often the bottleneck since network performance is so much slower than disk and CPU performance. Applications that need files from another PC in a home network benefit from fast home networks, and a 100Mbps (or 1Gbps) Ethernet network is better than an 11Mbps wireless network, except that wireless gives you the convenience of mobility. So, there’s a tradeoff.
Internet applications get their data from remote servers and benefit from broadband access networks, where the term “broadband” implies “fast,” or at least faster than dial-up. Internet apps that run on remote application servers and then display the result locally (e.g. stock portfolio tracking and travel reservations) require less network bandwidth than apps that run locally on your PC and download lots of data (e.g. music and video streaming). A 56Kbps dialup connection is not fast enough for most of these streaming applications, so broadband connections let you do more of the fun stuff.
Human interface – Don’t forget to consider keyboard quality and user interface design, which lets you read and understand information more quickly. While this user interface has traditionally been text-based or made up of graphics and images, more and more it includes video, and that shift pushes the bottlenecks up in the hierarchy.
Applications Determine Bandwidth Needs and Costs
Let’s start with applications that require very little bandwidth. Security monitoring applications may need just 19 bytes (7 digit customer number + 2 digit zone + 10 digit date/time stamp) only when an alert happens and for periodic tests of the system. Customers subscribe to a monitoring service because of the value it offers, rather than the bandwidth used. Health monitoring applications tend to have similar bandwidth needs but offer high value, such as saving the life of a loved one.
So, with the above examples in mind, consider which transmission is worth more – a 19-byte security alert or an on-demand movie rental? It costs about $4.95 to rent a DVD movie at Blockbuster, and it would take about 233 hours to download a 4.7GB movie file at 56Kbps. To be competitive, service providers can’t charge for the amount of bandwidth used or even by the amount of time you are connected. They must charge based on the value of their service.
When IBM introduced its PC in 1981 with a 300-baud dial-up modem option, that bandwidth was enough to display text at about the rate most people read, and a full mainframe screen of 24 lines of 80 characters (1920 bytes) took about a minute to paint.
Even today, a 1-page email with basic fonts (20KB) takes less than 4 seconds to transmit with a 56Kbps modem. At that speed, a relatively large and complex Excel spreadsheet (500KB) takes about 90 seconds to transmit (and about 5 seconds at 1Mbps).
If your applications primarily transmit text and data, then a dial-up modem may provide enough bandwidth for your needs and cost far less than broadband services (cable modem, DSL, or satellite). Even web browsing can be fast at 56Kbps if you turn off graphics, but as seen in the chart, network speed has been driven largely by efforts to improve the user interface with graphics and sound.
Do you own a digital camera? If so, you may want to attach a picture to grandma’s email, but instead of a few seconds, it now takes two minutes to transmit over a dial-up connection (1.3M-pixel images consume about 675KB each). If you send all of your vacation pictures (100 images) to Kodak for printing, the transfer will take 3-6 hours with a 56Kbps modem but just 11 minutes with a broadband connection of 1Mbps.
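The photo-upload figures above are easy to verify with the article's 10-bits-per-Byte rule; this back-of-envelope check uses the 675KB-per-image and 100-image numbers quoted above.

```python
# Sanity-check of the vacation-photo upload times, assuming
# 10 bits per Byte (the article's rule of thumb).

BITS_PER_BYTE = 10

def hours(size_bytes, link_bps):
    """Hours to transfer size_bytes over a link rated at link_bps."""
    return size_bytes * BITS_PER_BYTE / link_bps / 3600

photo = 675_000        # one 1.3-megapixel image, ~675KB
batch = 100 * photo    # a vacation's worth of pictures

print(f"{hours(batch, 56_000):.1f} h at 56Kbps")          # 3.3 h
print(f"{hours(batch, 1_000_000) * 60:.0f} min at 1Mbps") # 11 min
```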
Factors Affecting Network Performance
In this section, we’ll look at some of the factors affecting network performance, using an automotive analogy. If you were trying to determine the maximum speed of your car, a good place to go would be the Bonneville Salt Flats. With no speed limits, traffic lights, congestion, curves, potholes, on/off ramps, or distractions, your only limitations are the power of your engine and the design of your car. You can’t drive the same way in residential neighborhoods, since there are many more factors limiting performance, including safety.
Media Capacity – Just as we can widen the road and add lanes, we can also increase the size of the network pipe by using better-quality (read: more expensive) media. Since RG-59 coaxial cabling was designed for cable TV and 50-80 channels, satellite systems with over 200 channels need higher-quality RG-6 cabling. Likewise, the category 1 twisted-pair cabling installed in older homes is OK for voice apps but is unsuitable for high-speed data and can slow your 56Kbps dial-up modem down to 28Kbps or less. So, if you have the option of installing new twisted-pair wiring (even around the baseboard in a home office), use enhanced category 5 or 6. They both support 1Gbps Ethernet, so I suggest using them instead of the slightly older and less expensive category 3 or 5, which can only support 10baseT and 100baseT, respectively.
Wireless networking products based on the IEEE 802.11b standard use the globally available 2.4 GHz frequency band and offer mobile performance up to 11Mbps, but products based on the newer 802.11a standard use the wider and less crowded 5 GHz frequency band for performance up to 54Mbps. The tradeoff of higher performance is higher cost (but this will decline over time) and less coverage distance at peak rate (although 802.11a still outperforms .11b at any distance since it starts out much faster).
Utilization – In our automotive analogy, better utilization of highways can move more people to more destinations without increasing speed limits to unsafe levels. We can add extra lanes for more cars, encourage car pools with special lanes, and use buses and other mass transit systems.
In a similar way, multiplexing is a technique that improves the utilization of networks. An example of multiplexing is cable TV. You receive many channels of programming since network operators modulate the signals from each program in its own 6MHz frequency band and then rely on the TV tuner to select a specific channel. While cable companies can send more than 50 channels over their RG-59 cable, satellite companies send over 200 channels in their RF spectrum but then require special RG-6 cabling between the dish antenna and each decoder box since the RG-59 cabling lacks capacity for that much bandwidth.
Consumers want a greater choice of TV programming, and on their own schedule, but broadcasters can’t justify the cost of sending such highly personalized content. Very few people in Texas, for example, want to watch the University of Wisconsin hockey games, and even fewer want to watch the Toyota Celica automatic transmission repair course on video. To accommodate mass personalization, I expect to see a reallocation of RF spectrum within coax cabling and a rethinking of how video content is delivered.
Programming with mass appeal (such as the evening news or the Super Bowl) is ideal for broadcast delivery, and service providers already cache some special interest programming on their servers for narrowcast (pay-per-view and video-on-demand). True video-on-demand goes a step further by letting consumers go directly to the content source (e.g. Toyota’s servers) anytime they want. With the mix of these delivery techniques and the move toward high-definition (HDTV), I question the current model of delivering video in 6MHz channels, where each TV receives all of the content. TVs can only display 1 program at a time (or 2 with picture-in-picture), so I think content selection will move from the TV tuner into a media server, and the frequencies on the coax will be reallocated. The coax really only needs to carry 2 programs for each TV, so plenty of bandwidth can be freed up for other purposes, such as Internet-based video-on-demand.
Video Streaming – Today’s broadcast television model is inexpensive and delivers high quality, but you can only watch programs that have been selected for a large demographic. Recorded programming (tape or disk) shares the same low cost and high quality but lets you time shift a program so you can watch it when you want. Recording also gives you access to more content through rentals. Downloaded video can give you access to much more content and let you watch anytime, but quality is limited by picture size, frame rate, compression, etc., and you can’t watch until the download is complete. It can take 233 hours (over a week) to download a two-hour DVD movie (4.7GB) at 56Kbps, or just 13 hours at 1Mbps.
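The movie-download figures above follow directly from the 10-bits-per-Byte rule used throughout this article; here is the arithmetic spelled out as a short sketch.

```python
# Check of the DVD-movie download times (4.7GB image),
# using the article's 10-bits-per-Byte estimate.

BITS_PER_BYTE = 10
MOVIE_BYTES = 4.7e9  # a two-hour DVD movie

def download_hours(size_bytes, link_bps):
    return size_bytes * BITS_PER_BYTE / link_bps / 3600

print(f"{download_hours(MOVIE_BYTES, 56_000):.0f} h at 56Kbps")    # 233 h
print(f"{download_hours(MOVIE_BYTES, 1_000_000):.0f} h at 1Mbps")  # 13 h
```

At 233 hours, the download takes longer than the rental period of most video stores – which is why the economics favor streaming or physical delivery at dial-up speeds.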
Video streaming is like downloaded video but without having to wait for the download to complete. By using a more efficient (and compute-intensive) compression algorithm, the same movie in MPEG-4 format can be streamed at near-DVD quality with as little as 750Kbps.
Resolution – Less bandwidth is needed with less content, so many Internet video streams fill just a quarter or a sixteenth of a screen and transmit fewer frames (10-20 per second instead of the normal 30 for broadcast TV). For still images, a digital camera with fewer pixels (e.g. 1M vs. 4M) will save storage space and improve transmission times, but it also means your photos can’t be printed as large and you don’t have much ability to crop pictures.
Compression – The various compression algorithms, which reformat data to reduce storage and transmission times, range in efficiency from 2:1 for text to 50:1 for some images and video, depending on the specific content. With such reductions, compression has a large impact on bandwidth needs. Once transmitted, many formats can be restored exactly with no loss of content, but other “lossy” formats sacrifice some quality for higher compression ratios. Such is the case with JPEG photos, MP3 audio, and MPEG-4 video.
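The effect of those compression ratios on transmission time is easy to see with a quick sketch. The 2:1 and 50:1 figures are the ones cited above; the 500KB file size and 56Kbps link are illustrative choices of mine.

```python
# How a compression ratio shrinks transmission time, using the
# article's 10-bits-per-Byte rule. Ratios are the cited 2:1 (text)
# and 50:1 (some images/video) figures.

BITS_PER_BYTE = 10

def seconds(size_bytes, link_bps, ratio=1):
    # A ratio-to-1 compressor divides the bytes actually sent
    return size_bytes / ratio * BITS_PER_BYTE / link_bps

doc = 500_000  # a 500KB file over a 56Kbps modem
print(f"uncompressed: {seconds(doc, 56_000):.0f} s")       # 89 s
print(f"2:1 (text):   {seconds(doc, 56_000, 2):.0f} s")    # 45 s
print(f"50:1 (video): {seconds(doc, 56_000, 50):.1f} s")   # 1.8 s
```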
GIF has become the most common compression format for graphics on the Web since it is a lossless algorithm, but GIF doesn’t support as many colors as JPEG, which was optimized for photos. Likewise, MPEG-2 video has become the standard for digital movies, but MPEG-4 is becoming the choice for Internet-based video. MPEG-4 consumes lots of CPU power but can automatically adjust display quality to the available bandwidth, so near-VCR quality videos can be transmitted with just 250Kbps, and near-DVD videos require just 750Kbps. That means a cable or DSL provider can offer Internet-based video-on-demand services, and nearly every home network can support them.
High-definition video poses more of a challenge because, without compression, HDTV streams require 300 Mbps, which is far more than most home networks can support. MPEG-2 compression cuts that to just over 20 Mbps, but still it’s more than is available from networks based on HomePNA, HomePlug, HomeRF, or 802.11b. Again, the lossy MPEG-4 format can help by supporting near-HDTV quality with only 2.5Mbps, which is easily supported by all of these home network technologies.
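Pairing the stream bitrates above with the peak network rates quoted in this article gives a simple feasibility table. This is a sketch using the article's round numbers, not lab measurements, and real-world throughput will be lower than the rated peaks.

```python
# Which home-network technologies can carry which video streams?
# All rates are the article's quoted figures (peak, not sustained).

networks = {
    "802.11b": 11e6,
    "HomeRF 2.0": 10e6,
    "100baseT Ethernet": 100e6,
}
streams = {
    "MPEG-4 near-DVD": 750e3,
    "MPEG-2 HDTV": 20e6,
    "MPEG-4 near-HDTV": 2.5e6,
    "uncompressed HDTV": 300e6,
}

for net, capacity in networks.items():
    feasible = [s for s, bps in streams.items() if bps <= capacity]
    print(f"{net}: {', '.join(feasible)}")
```

The table confirms the point above: only MPEG-4's 2.5Mbps near-HDTV stream fits on all three networks, while MPEG-2 HDTV at 20Mbps fits on wired Fast Ethernet alone.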
Attenuation – Like sound, signal strength diminishes with distance or when going through walls and floors. To compensate, many networks adjust performance downward with distance. It’s like being able to whisper into someone’s ear when you’re close but having to shout when they’re in another room. The impact is most noticeable with wireless networks like 802.11b, where the 11Mbps peak rate can drop below 1Mbps when systems are in rooms separated by two or more walls.
Interference – In our automobile analogy, interference would be like speed bumps, potholes, curves, distractions, and on/off ramps, all of which make it so you can’t safely drive at the posted speed limit. Network cabling uses twists or shielding to minimize the effects of interference, and wireless networks can use techniques like frequency hopping. HomeRF is notable for its use of frequency hopping to avoid interference, as compared with 802.11b, which lacks that ability since it was designed for offices that don’t have much interference. For more on “Interference Immunity of 2.4 GHz Wireless LANs,” visit https://www.hometoys.com/htinews/aug01/articles/immunity/immunity.htm.
Power Management – As wireless networks grow in popularity, RF interference between neighboring networks increases, and power management can eliminate much of that interference. The IEEE is working on technologies that sense when devices are near each other so transmit power can be kept low to extend battery life and minimize interference, and then allow power levels to increase automatically when devices are far apart (like shouting, but within regulatory limits).
Contention – Devices operating in Ethernet-based networks compete for network bandwidth, and this contention causes packet collisions and retries. Quality-of-service (QoS) is a method of reserving bandwidth that requires each networked device to be controlled by a central controller, similar to regulated freeway on-ramps. QoS can work fine as long as all devices play by the same rules, but it’s difficult to introduce new QoS rules into an existing network, since devices that don’t obey will hurt performance for all others. For more information on “Quality of Service in the Home Networking Model,” visit https://www.hometoys.com/htinews/aug01/articles/qos/qos.htm.
Signal Processing – Moore’s Law continues to drive the semiconductor industry, enabling the affordable implementation of advanced protocols that reduce errors and squeeze more bandwidth from the same media. An example is OFDM (orthogonal frequency-division multiplexing), which is used in both powerline and wireless networks. The emerging 802.11g standard, which is promoted as the follow-on to 802.11b, increases maximum performance from 11Mbps to 54Mbps by using OFDM in the same RF spectrum.
Protocol – Moore’s Law also enables the continued development of networking protocols to get more performance from the same media. Ethernet, for example, can now run at speeds up to 1Gbps over category 5 cabling. The HomePNA standard that initially supported Ethernet protocols over existing phone wires at only 1Mbps now runs at 32Mbps. HomeRF followed a similar evolution, with early products supporting only 1.6Mbps but newer products supporting 10Mbps with backward compatibility.
Always-on vs. On-demand – If your application doesn’t need instant response time, then lots of data can be transmitted during off-hours using modest bandwidth. Even the vertical blanking interval between TV frames (the black bars that show when your TV’s vertical hold needs adjusting) can be used to transmit data and has enough bandwidth to download a week’s top-10 music titles and electronic program guide overnight. Does grandma really need to receive your email with photo attachment within the next few seconds, or will an hour delay be OK? And does she even have the bandwidth on her end to receive the email that fast?
Mobile vs. Stationary – Expect to give up some performance if you need mobility and choose wireless networks, though wireless protocols are progressing rapidly. While most Ethernet networks run at 100Mbps, 802.11b supports only 11Mbps; and even as 802.11a reaches up to 54Mbps, cat. 5 cabling will soon support Gigabit Ethernet.
Latency – Bandwidth and latency are related. Where bandwidth refers to network speed and is often measured by timing how long it takes to transmit a large file, latency measures the delay before each transmission arrives and is measured with network tools like ping and traceroute. These delays may be due to signals passing through various routers and proxies, or to distance, such as the propagation delays of satellite transmissions. Applications like telephony are more sensitive to latency than to bandwidth, and you can hear the delays when talking over a satellite or voice-over-IP connection.
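A simple way to see why telephony cares about latency more than bandwidth is to model total delivery time as round-trip latency plus serialization time. The packet size and round-trip figures below are illustrative assumptions (a 500ms round trip is typical of geostationary satellite links), not measurements.

```python
# Toy model: total time to deliver a packet is roughly the round-trip
# latency plus the time to clock the bits onto the wire.
# Uses the article's 10-bits-per-Byte rule for serialization.

BITS_PER_BYTE = 10

def total_ms(size_bytes, link_bps, rtt_ms):
    serialization_ms = size_bytes * BITS_PER_BYTE / link_bps * 1000
    return rtt_ms + serialization_ms

# A short 200-Byte voice packet: the "slower" dial-up link wins
# because latency, not bandwidth, dominates small transfers.
print(round(total_ms(200, 56_000, 100)))     # dial-up, 100ms RTT -> 136
print(round(total_ms(200, 1_000_000, 500)))  # satellite, 500ms RTT -> 502
```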
Fast Networks and the Exploding PC
Ethernet 100baseT networks in home offices and 1394-based networks in entertainment centers run at speeds of 100Mbps and 400Mbps – already faster than the internal bus of older PCs. With that speed, many PC functions can explode out of the box and be distributed across the network as thin servers, and we already see examples of print servers, scanner servers, network gateways, hard disk servers, and attached CD-RW and DVD drives.
This networking trend has PCs becoming more appliance-like and consumer electronics equipment becoming more PC-like. It also suggests that the PC business model could change to become more like set-top boxes. In the PC model, consumers buy everything at once: processor, memory, disk, and even software. They then expand by inserting cards into expansion slots, attaching peripherals to I/O ports, and installing additional software on the hard disk.
In the set-top model, individuals assemble CE components starting at any point and then expand as needed. While you might start with a VCR or DVD player, someone else may start with a game console, and you can each expand by attaching other components. This allows an easy transition from analog (VCR) to digital (DVD), and of course you can still keep the old VCR. It also simplifies buying decisions since you don’t have to buy more than you need, and fast home networks can have the same effect on computing.
With optical fiber connections to homes for faster access networks, disk storage can move further out – into remote services where the data can be protected and shared as appropriate. These faster networks will make it easier for you to store your photos in remote photo albums for family members across the country, and you may even want a service to store and manage your financial records so they are protected even if your house burns down. Faster access networks (popular by 2010) will also enable massively parallel grid computing applications where homeowners could lease back their excess PC capacity for use in solving complex computing problems. While this concept is possible today, it’s not yet feasible because the bandwidth of today’s access networks is too slow for moving the required data around.
Applications determine bandwidth needs and costs. Health monitoring, for example, requires very little bandwidth but saves lives and has high value. If your applications only need to send text and data, then dial-up modems can provide enough bandwidth for your needs, and they cost less than broadband. The Internet experience is enriched by added graphics, images, sounds and video, but this requires more bandwidth. Still, network bandwidth is just one part of overall system performance, and you must also consider the hierarchy of bottlenecks. Just as smart PC buyers don’t buy systems based on clock speed alone and also consider many other factors such as application needs, the same is true for networking. So, consider the many factors affecting bandwidth requirements, availability and cost.
After 30 years at IBM and running a home systems consulting practice, Wayne joined Siemens IC Mobile to help develop home networking strategies and apply cordless telephone technology to HomeRF, thus enabling the integration of data and voice applications. Wayne is a home networking visionary, frequent speaker, and author with a monthly column in HomeToys.com. His vision includes consumers with easy access to services and service providers with equal access to consumers, all without worrying about wiring or incumbent competitors that control the infrastructure. Wayne wrote the market research report, “Information Appliances and Pervasive Net Access”. He serves as the Communications Chairman for the HomeRF Working Group and can be reached at email@example.com .