When it comes to internet speeds, we've long since consigned the humble kilobit-class connection to the dustbin, so a mathematics-based breakthrough has us wondering whether megabit- and even gigabit-level connections will one day sound just as quaintly archaic. Researchers at Japan's Tohoku University have tweaked existing protocols to enable standard fiber-optic cables to carry data at hundreds of terabits per second [Subscription link]. At that speed, hundreds of full VOD movies could be downloaded almost instantaneously. At the heart of the development is a technique already used in some digital TV tuners and wireless data connections called quadrature amplitude modulation (QAM). One glance at the Wikipedia explanation shows that it's no easy science, but the basics of QAM in this scenario require a stable carrier wavelength for data transmission. As the radio spectrum provides this, QAM-based methods work fine for some wireless protocols; however, the nature of the optical spectrum has meant this was not the case for fiber-optic cables ... until now. The university team has solved the stability problem using a special laser that makes it feasible to pipe data down a glass fiber using the QAM method at blistering speeds. Although we shouldn't expect to be choosing from internet connections rated in Tbps anytime soon, the development could one day make us look back on ADSL as fondly as we now do our 56K modems. (Crossposted to Tech.co.uk)
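For the curious, here is a rough Python sketch of the core QAM idea: each transmitted symbol encodes several bits at once as a combination of amplitude and phase. This is an illustrative toy of generic 16-QAM, not the Tohoku team's actual optical modulation scheme.

```python
# Minimal 16-QAM illustration: map 4-bit groups onto a 4x4 constellation
# of (in-phase, quadrature) points. A toy sketch of the general QAM idea
# in the article, NOT the researchers' actual optical method.

# Gray-coded 2-bit pair -> amplitude level on one axis
LEVELS = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}

def qam16_modulate(bits):
    """Map a bit sequence (length divisible by 4) to complex symbols."""
    symbols = []
    for i in range(0, len(bits), 4):
        b = bits[i:i + 4]
        i_level = LEVELS[(b[0], b[1])]  # in-phase component
        q_level = LEVELS[(b[2], b[3])]  # quadrature component
        symbols.append(complex(i_level, q_level))
    return symbols

# Each symbol carries 4 bits, so 16-QAM quadruples throughput per symbol
# versus simple on/off keying -- which is why higher-order QAM on a
# stable optical carrier translates into such large bandwidth gains.
print(qam16_modulate([0, 0, 1, 1, 1, 0, 0, 1]))  # -> [(-3+1j), (3-1j)]
```

The catch the article describes is that this mapping only works if the receiver can rely on a rock-steady carrier, which is exactly the stability problem the special laser solves.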
lol, well, not yet. This type of technology probably won't show up in consumer markets for decades... I'd honestly just be happy with 1 Gb/s.
The Japanese are really on their game; you have to hand it to them. They're a world ahead of the rest when it comes to technology. A 1 Gb fiber connection? In Japan, people already have it.
/// your hard drive can't even write that fast.. no point // Err, not exactly... the newer SOLID-STATE drives are fast enough, but very expensive at present.
Internet2 has a land speed record (31 December 2006):

* Records Set: IPv6 Single and Multiple Stream
* I2-LSR Record: 272,400 terabit-meters per second
* Team Members:
  o The University of Tokyo
  o WIDE Project
  o NTT Communications
  o et al.
* Network Distance: 30,000 kilometers (effective)
* Time: 5 hours
* Average throughput: 9.08 gigabits per second
* Software Notes:
  o Linux kernel 2.6.18.5 x86_64 / CentOS 4.4
  o Modified iperf for generating TCP packets
* Hardware Notes:
  o Intel Xeon (Woodcrest), 3.00GHz dual core (sender and receiver servers)
  o SUPERMICRO X7DBE motherboard
  o 4GB memory
  o 500GB SATA disk
  o Chelsio S310E-SR 10 Gigabit Ethernet Adapter

http://www.internet2.edu/lsr/history.html

Show me an SSD that can do tens of thousands of GB/s in read/write. It's hundreds of terabits per second. To make it easy for us: 100 terabits is 12.5 terabytes.
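To keep the units straight (a recurring confusion in this thread), here is the arithmetic spelled out in a few lines of Python, using only the figures quoted above:

```python
# Network links are rated in bits per second; disks in bytes per second.
def terabits_to_terabytes(tb):
    return tb / 8  # 8 bits per byte

print(terabits_to_terabytes(100))  # 100 Tb/s -> 12.5 TB/s

# The I2-LSR figure is a distance-bandwidth product:
# 272,400 Tb-m/s over 30,000 km (3.0e7 m) of network distance.
print(272_400e12 / 3.0e7 / 1e9)    # -> 9.08 (Gb/s average throughput)
```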
Yeah, I reckon the Japanese will get it a few years before us, because the UK seems to be the slowest for everything electronic. The fastest broadband in the UK is about 22 Mb/s, I think, whereas in Japan you can get 1 Gb/s. And even if you get 22 Mb/s, that's "up to", and you'll probably get less than that. Stupid broadband providers being cheap by giving rubbish contention ratios. And it's not like the internet is cheap: apparently it costs the same as in Japan, but they get much quicker speeds.
100s of terabytes/s reads/writes are possible with SSDs. Just read below what MS Research has achieved with ORDINARY SATA hard drives. NOTE: SSD disks are about 10-100 times faster than SATA hard drives.

Performance Considerations
Gigabyte per Second Transcontinental Disk-to-Disk File Transfers
Peter Kukol, Jim Gray, Microsoft Research
9 July 2004

Abstract: Moving data from CERN to Pasadena at a gigabyte per second using the next-generation Internet requires good networking and good disk IO. Ten Gbps Ethernet and OC192 links are in place, so now it is simply a matter of programming. This report describes our preliminary work and measurements in configuring the disk subsystem for this effort. Using 24 SATA disks at each endpoint, we are able to locally read and write an NTFS volume striped across 24 disks at 1.2 GBps. A 32-disk stripe delivers 1.7 GBps. Experiments on higher-performance and higher-capacity systems deliver up to 3.5 GBps.

Summary: We've been working with Cal Tech (Yang Xia, Harvey Newman, et al.) and CERN (Sylvain Ravot, et al.) to move data between CERN and Pasadena at 1 GBps using the Internet rather than sneaker net. Our networking colleagues (Ahmed Talat, Inder Sethi, et al.) have a good start on using 10 Gbps Ethernet to move 1 GBps across the planet (ultralight). We (Kukol and Gray) are working on the first-meter/last-meter problem of quickly moving data from disk to NIC and NIC to disk. To do that we need roughly 1.2 GBps of disk I/O bandwidth (a 20% margin allows us some slack). That translates to about 20 disk drives at the outer band (60 MBps/disk) and 34 drives when reading the inner disk zones (36 MBps/disk).

Using a dual-Xeon computer with two Highpoint + one 3ware SATA controllers and 24 disks, we achieved 625 MBps read and 534 MBps write. We observe that the Highpoint controllers show good throughput but behave poorly when more than one is present. This was a borrowed system, and we did not have much latitude to reconfigure it and explore this issue further.

To get to 1 GBps, we built a white-box dual-processor Opteron on a Tyan main board which includes one AMD PCI-X bridge supporting 4 PCI-X slots. We added SuperMicro Marvell-based SATA controllers, as Brent Kelley of AMD reported great performance on these. Each of these controllers reliably delivers about 450 MBps sequential read and write with eight disks attached. The sequential disk read/write bandwidth scales linearly when a second SuperMicro card is added; but with 3 of these cards and nineteen or more disks, the bandwidth plateaus at around 1.05 GBps read and 1.10 GBps write, using about 27% of one processor.

To get beyond 1 GBps, we have been experimenting with the Newisys™ 4300. It supports up to four Opteron processors and includes three AMD-8131 PCI-X bridges supporting four 64/133 PCI-X slots. We've tested the 4300 server with up to 48 disks, and the observed disk bandwidth scales almost linearly up to 32 disks (with 8 disks each on 4 SuperMicro SATA controllers), achieving a speed of 1.3 GBps with 24 disks and 1.7 GBps with 32 disks. To go beyond 32 disks, the slower PCI-X slots had to be used for the additional SATA controllers, and bandwidth increased more slowly. The highest throughput (using the file system to a single logical volume) we've been able to measure on the Newisys™ server has been around 2.2 GBps. Note that NTFS ...

Read the entire article here: http://arxiv.org/ftp/cs/papers/0502/0502009.pdf
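The sizing logic in that report is easy to reproduce. Here is a minimal Python version of its back-of-envelope math; the per-disk rates are the report's own outer-band and inner-zone SATA figures:

```python
import math

# How many drives must be striped to hit a target sequential throughput?
def drives_needed(target_mbps, per_disk_mbps):
    return math.ceil(target_mbps / per_disk_mbps)

TARGET = 1200  # 1.2 GBps = the 1 GBps goal plus the 20% margin mentioned

print(drives_needed(TARGET, 60))  # outer band, ~60 MBps/disk -> 20 drives
print(drives_needed(TARGET, 36))  # inner zones, ~36 MBps/disk -> 34 drives
```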
Where does it state that SSDs are capable of doing 100s of terabytes (it's supposed to be bits)? BTW, by "ordinary": 24 discs slapped together to work as one is far from ordinary, and it only does 1.2 GBps.
Why must a single drive write super-fast? It can NEVER be done, EVER; writing to and reading from "physical devices" is very SLOW. To overcome this, we have to use multiple drives/devices. A little background here: an OC-192 fibre-optic link carries data at about 10 Gigabits/s, and that's the fastest Internet backbone router speed we are using as of now. To transfer data that fast with TODAY's technology, we use a RAID: an array of PARALLEL linked drives whereby data is written to and read from all the drives simultaneously (for example, a single sentence from an essay is split into pieces and each piece is written to a different drive, all at the same time, in PARALLEL). This RAID therefore acts like a single drive; that's what the article I quoted is talking about. A 1,000,000 terabit/s transfer could be done much the same way as in that article, by having more drives; the only limiting factor is the operating system and its filesystem. By the way, 1.2 GB/s is 1.2 × 8 = 9.6 Gigabits per second.
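If the striping idea is still unclear, here is a toy Python sketch of the data layout RAID-0 uses. Real arrays do the per-drive writes in parallel hardware; this loop just shows how the pieces get split up:

```python
# Toy sketch of RAID-0 striping: split a payload into fixed-size stripes
# and hand them to N "drives" round-robin, so each drive only stores
# (and in a real array, only writes) 1/N of the data.
def stripe(data: bytes, n_drives: int, stripe_size: int = 4):
    drives = [bytearray() for _ in range(n_drives)]
    for i in range(0, len(data), stripe_size):
        drives[(i // stripe_size) % n_drives] += data[i:i + stripe_size]
    return drives

# "A single sentence is split into pieces written to different drives":
for d in stripe(b"A single sentence from an essay", 3):
    print(bytes(d))
```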
Never say never...

"I think there is a world market for maybe five computers."
* Thomas Watson, chairman of IBM, 1943

"There is no reason anyone would want a computer in their home."
* Ken Olson, president, chairman and founder of Digital Equipment Corp., 1977

Bottom line is, the average computer owner doesn't have that many drives slapped together in a RAID setup. When each drive weighs nearly 1 kg / 2 pounds, it won't be practical to have 20-30 drives inside a PC case. Imagine carrying it somewhere! I won't go back to the time when computers weighed tons, because people are impatient and want things to be instantaneous. How are we to fit that many drives in a notebook? =)
If you guys go to speedtest.com, Japan dominates the rankings by far; they have averages of 12 Mb/s, while the North American average is about 4 Mb/s. On my desktop I get 1.9 Mb/s, but when I open Wi-Fi and plug the cable into my laptop, I get a whopping 9.7 Mb/s. That's fast for me; PA was loading instantly.
How is it that you can achieve that much of a bandwidth increase by opening your Wi-Fi? It sounds like that 9.7 Mb/s figure is a combination of your Wi-Fi connection and the wired connection to the router. And even if we achieve an internet connection capable of 1 Gbps, a common HDD can't even write at that speed: we would need a hard drive writing at about 125 MB/s, and the fastest that a solid-state drive can write is 45 MB/s. I'll just wait till they make faster drives and hope that we can hit a 1 Gbps internet connection soon.
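For the record, here is the link-rate versus write-rate math in Python, using the 45 MB/s SSD figure quoted above:

```python
# A link rated in megabits per second needs only 1/8 of that number in
# megabytes per second of sustained disk write speed.
def required_write_mbytes(link_mbps):
    return link_mbps / 8

print(required_write_mbytes(1_000))      # 1 Gbps -> 125 MB/s to disk
print(required_write_mbytes(1_000_000))  # 1 Tbps -> 125,000 MB/s (125 GB/s)

# Conversely, a 45 MB/s SSD can sink roughly a 360 Mbps connection flat out.
print(45 * 8)  # -> 360 (Mbps)
```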