I will talk about the future of data storage and how we are going to face the challenge of storing all the data we create nowadays. Our generation is producing an enormous amount of data thanks to cheap internet and smart devices, so we need to upgrade our storage technology.
GGGQEP (the Gadolinium Gallium Garnet Quantum Electronic Processor) is in two parts – the GGG and the QEP – and it can be framed, in the most simplified way, as follows:
Gadolinium Gallium Garnet – or GGG for short – has a number of exotic properties that make it a perfect candidate for the next level of massive data storage. Firstly, it is crystalline in nature, and its ingots can easily be grown from a seed crystal using the Czochralski process. Secondly, it has a cubic lattice, and hence any 3D laser-writing upon it, and subsequent reads, can be conducted with great accuracy. Thirdly, its hardness on the Mohs scale falls somewhere between that of orthoclase feldspar and topaz – neither too hard to process economically at a large scale, nor too soft to risk data loss from frequent and prolonged read-write cycles. And lastly, GGG has already been used in a related field – magnetic bubble memory – as a substrate for magneto-optical films. Hence, its magnetic and optical properties, including its refractive index, are well documented.
Now for some elementary, but important, figures. The molecular formula of GGG is Gd3Ga5O12. One mole of GGG (i.e. 6.022 * 10^23 molecules) has a mass of about 1012.4 grams which, at a density of 7.08 g/cm^3, works out to a molar volume of roughly 143 cm^3. If this were crafted into a perfect cube, its sides would be about 5.2 cm long. For the sake of simplicity, call it a 5 cm cube: a GGG cube of roughly 5cm * 5cm * 5cm holds on the order of one mole of molecules. Through a rather extensive extrapolation, it can be worked out that GGG can hold data (in bits) equivalent to about 12.5% of the total number of molecules present. Hence, one mole can hold about 7.5 * 10^22 bits of data, which evaluates to roughly 9,400 exabytes. Each exabyte is equivalent to one billion gigabytes. For the sake of perspective, the entire indexed internet, at the moment, holds about 700 exabytes of data. So a single GGG cube measuring roughly 5 cm per side could comfortably accommodate more than a dozen internets at their present size.
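To make that arithmetic easy to check, here is a minimal Python sketch of the same back-of-the-envelope calculation. The 12.5% bits-per-molecule figure and the 700-exabyte estimate for the indexed internet are taken from the text above, not measured values; the rest are standard constants.

```python
# Back-of-the-envelope GGG storage estimate (figures from the text above).

AVOGADRO = 6.022e23          # molecules per mole
MOLAR_MASS_G = 1012.4        # g/mol for Gd3Ga5O12
DENSITY_G_CM3 = 7.08         # g/cm^3
BITS_PER_MOLECULE = 0.125    # assumption from the text: ~12.5% of molecules hold a bit
INDEXED_INTERNET_EB = 700    # rough estimate from the text, in exabytes

molar_volume_cm3 = MOLAR_MASS_G / DENSITY_G_CM3          # ~143 cm^3
cube_side_cm = molar_volume_cm3 ** (1 / 3)               # ~5.2 cm
bits_per_mole = AVOGADRO * BITS_PER_MOLECULE             # ~7.5e22 bits
exabytes_per_mole = bits_per_mole / 8 / 1e18             # decimal exabytes

print(f"molar volume : {molar_volume_cm3:.1f} cm^3")
print(f"cube side    : {cube_side_cm:.2f} cm")
print(f"capacity     : {exabytes_per_mole:,.0f} EB")
print(f"'internets'  : {exabytes_per_mole / INDEXED_INTERNET_EB:.1f}")
```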
Quantum Electronic Processing – or QEP in short – is an area still fraught with all manner of cognitive and execution hurdles. Cognitive in the sense that its central idea is rather counter-intuitive, and execution in the sense that, even once the idea is grasped, the implementation still raises some major problems. But there is a promising solution, and hence both the cognitive and the execution hurdles will be addressed here. Quantum processing differs from traditional processing in one major way: it doesn't just deal with the binary states of 0 and 1, but also with superpositions of those states. Effectively, qubits – the basic units of quantum computing – can exist in a blend of 0 and 1, and a register of such qubits can represent many values simultaneously. This gives quantum processing an inherent parallelism, which in turn could make quantum computing several million times faster and more efficient than traditional, transistor-based bit processing.
So much for Moore’s law, it would seem.
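As a loose illustration of the superposition idea (a toy state-vector simulation in NumPy, not a claim about any particular hardware), the sketch below puts a small register into an equal superposition: a Hadamard gate on each of n qubits spreads the amplitude evenly over all 2^n basis states at once, which is the source of the parallelism described above.

```python
import numpy as np

# Toy state-vector simulator: n qubits start in |0...0>, then a Hadamard
# gate on each qubit spreads the amplitude evenly over all 2^n basis states.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def equal_superposition(n_qubits: int) -> np.ndarray:
    state = np.zeros(2 ** n_qubits)
    state[0] = 1.0                      # |00...0>
    op = H
    for _ in range(n_qubits - 1):
        op = np.kron(op, H)             # H applied to every qubit
    return op @ state

state = equal_superposition(3)
print(state)                                      # eight equal amplitudes of 1/sqrt(8)
print("probabilities:", np.round(state ** 2, 3))  # each basis state: 1/8
```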
Unfortunately, in order to harness the vast power of qubits, one major hurdle has to be overcome in the execution phase. Somehow, the quantum dynamics in such a process have to be observed before they actually occur. Otherwise, any interference with the dynamics in any conventional sense – say by trying to observe them while they are happening – will immediately destroy the quantum superposition parallelism, as all quantum waves collapse into the traditional binary states – either a 0 or a 1. In other words, if the quantum processor is accessed in the conventional way, it behaves exactly like a traditional processor – able to work out only one function at a time. In order for it to operate at its full parallelism potential, no observations of its quantum dynamics can be done while they are occurring, or after they have occurred. But there is no restriction against observing the dynamics before they've occurred – if such can be achieved.
Incredibly, there exists a way of getting around the quantum observation problem: quantum entanglement. This is a phenomenon that occurs when two similar subatomic particles, such as photons or electrons, interact with each other for an instant, and thereafter attain correlated, but opposite, attributes for such traits as polarization, spin and momentum. For instance, if one entangled particle has a clockwise spin, its entangled partner will have an anticlockwise spin. The interesting thing about such particles is that they retain this correlation regardless of the distance between them – and the correlation appears to be established instantaneously. The most recent quantum entanglement experiments put any such influence at least ten thousand times faster than the speed of light. So the communication can't be happening in any classical sense, as this would violate relativity.
Quantum entanglement can be utilized in quantum computing by using one of the entangled pair for measurements, while leaving the other particle inside the quantum processor. As long as the quantum processor does not interfere with the quantum superposition state of this second particle, measurements on the first entangled particle can consistently, and reliably, give information about the quantum states of the two. The only challenge is to avoid triggering decoherence of the two particles – which can be achieved by ensuring that all measurements of the entangled particle remain within the dephasing margins of the system, or by deploying optical pulsing mechanisms into the system. In this way, the total amount of information processed per unit time is capped only by the bound set by Holevo's theorem.
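Here is a minimal sketch of the correlation described above, assuming an idealized Bell (singlet-like) pair measured in the same basis; it says nothing about the dephasing margins or optical pulsing just mentioned. Sampling the joint state always yields opposite outcomes for the two particles.

```python
import numpy as np

rng = np.random.default_rng(0)

# Idealized entangled pair (|01> - |10>)/sqrt(2): measuring both particles
# in the same basis always gives opposite results, no matter how far apart.
amplitudes = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)   # basis |00>,|01>,|10>,|11>
probabilities = amplitudes ** 2

for _ in range(5):
    outcome = rng.choice(4, p=probabilities)     # joint measurement outcome
    a, b = divmod(outcome, 2)                    # split into the two particles' bits
    print(f"particle A: {a}  particle B: {b}  (always opposite)")
```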
The rationale behind creating the GGGQEP (Gadolinium Gallium Garnet Quantum Electronic Processor) system is simple. The two factors that limit computing capacity most are storage volume and processing speed. Other factors, such as data transfer rates, are easily dealt with by using optical, instead of electrical, communication pathways. Bigger files are likely to be created as storage capacities increase, but even now, file systems such as Btrfs, XFS, and even the common NTFS can theoretically handle sizes in the exabyte range. The increased computation needed in user-space can, in turn, be handled using quantum algorithms, such as the one for Simon's problem, Shor's algorithm, and several others derived from the Abelian hidden subgroup problem.
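For a sense of what "Simon's problem" refers to, here is a purely classical, brute-force Python illustration of the problem statement. The quantum algorithm solves it exponentially faster; this sketch only shows the hidden structure being recovered, and the oracle is a made-up toy example.

```python
# Simon's problem: f is a 2-to-1 function with f(x) = f(x XOR s) for some
# hidden mask s. The quantum algorithm recovers s with roughly n oracle
# queries; classically one has to hunt for a colliding pair, as below.
def make_oracle(s: int, n: int):
    labels, table = {}, {}
    next_label = 0
    for x in range(2 ** n):
        key = min(x, x ^ s)              # x and x^s are forced to share an output
        if key not in labels:
            labels[key] = next_label
            next_label += 1
        table[x] = labels[key]
    return lambda x: table[x]

n, secret = 4, 0b1011                    # toy instance; the oracle is invented
f = make_oracle(secret, n)

seen = {}
for x in range(2 ** n):
    out = f(x)
    if out in seen:                      # collision found: s = x XOR earlier x
        print(f"recovered mask s = {seen[out] ^ x:#06b}")
        break
    seen[out] = x
```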
Through extreme laser engineering of GGG at the molecular level, and a bit of lateral thinking about quantum dynamics, the future of computing seems boundless. Most of the constituent technologies described here are already operational – some in futuristic prototypes, and others at organizations such as DARPA, NASA and CERN. Whether or not GGG is already in use as a storage medium remains "classified", but from a theoretical perspective, there isn't anything stopping its use. What remains to be done, therefore, is to combine all the various technologies and create the next generation of computers. This, judging by current trends, might happen within this lifetime… and computing will change fundamentally, and forever.
An extraordinarily large unit of digital data, one Exabyte (EB) is equal to 1,000 Petabytes or one billion gigabytes (GB). Some technologists have estimated that all the words ever spoken by mankind would be equal to five Exabytes.
In a world flooded with data, figuring out where and how to store it efficiently and inexpensively becomes a larger problem every day. One of the most exotic solutions might turn out to be one of the best: archiving information in DNA molecules.
The prevailing long-term cold-storage method, which dates from the 1950s, writes data to pizza-sized reels of magnetic tape. By comparison, DNA storage is potentially less expensive, more energy-efficient and longer lasting. Studies show that DNA properly encapsulated with a salt remains stable for decades at room temperature and should last much longer in the controlled environs of a data center. DNA doesn’t require maintenance, and files stored in DNA are easily copied for negligible cost.
Even better, DNA can archive a staggering amount of information in an almost inconceivably small volume. Consider this: humanity will generate an estimated 33 zettabytes of data by 2025—that’s 3.3 followed by 22 zeroes. DNA storage can squeeze all that information into a ping-pong ball, with room to spare. The 74 million million bytes of information in the Library of Congress could be crammed into a DNA archive the size of a poppy seed—6,000 times over. Split the seed in half, and you could store all of Facebook’s data.
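To see why those comparisons are plausible, here is a rough Python estimate under simple assumptions added for illustration: a raw density of 2 bits per nucleotide with no error-correction overhead, an average single-stranded nucleotide mass of about 330 g/mol, and a DNA density of roughly 1.7 g/cm^3.

```python
# Rough, assumption-laden estimate of the DNA needed for ~33 zettabytes.
AVOGADRO = 6.022e23
BITS_PER_BASE = 2            # ideal raw encoding, no error-correction overhead
NUCLEOTIDE_G_PER_MOL = 330   # approx. average mass of one ssDNA nucleotide
DNA_DENSITY_G_CM3 = 1.7      # approx. density of dry DNA

zettabytes = 33
bits = zettabytes * 1e21 * 8
bases = bits / BITS_PER_BASE
grams = bases / AVOGADRO * NUCLEOTIDE_G_PER_MOL
volume_cm3 = grams / DNA_DENSITY_G_CM3

print(f"bases  : {bases:.2e}")       # ~1.3e23 nucleotides
print(f"mass   : {grams:.0f} g")     # a few tens of grams
print(f"volume : {volume_cm3:.0f} cm^3 (a ping-pong ball is ~34 cm^3,")
print("         so the text's comparison is the right order of magnitude)")
```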
Science fiction? Hardly. DNA storage technology exists today, but to make it viable, researchers have to clear a few daunting technological hurdles around integrating different technologies. As part of a major collaboration to do that work, our team at Los Alamos National Laboratory has developed a key enabling technology for molecular storage. Our software, the Adaptive DNA Storage Codex (ADS Codex), translates data files from the binary language of zeroes and ones that computers understand into the four-letter code biology understands.
ADS Codex is a key part of the Intelligence Advanced Research Projects Activity (IARPA) Molecular Information Storage (MIST) program. MIST seeks to bring cheaper, bigger, longer-lasting storage to big-data operations in government and the private sector, with a short-term goal of writing one terabyte—a trillion bytes—and reading 10 terabytes within 24 hours at a cost of $1,000.
FROM COMPUTER CODE TO GENETIC CODE
When most people think of DNA, they think of life, not computers. But DNA is itself a four-letter code for passing along information about an organism. DNA molecules are made from four types of bases, or nucleotides, each identified by a letter: adenine (A), thymine (T), guanine (G) and cytosine (C). They are the basis of all DNA code, providing the instruction manual for building every living thing on earth.
A fairly well-understood technology, DNA synthesis has been widely used in medicine, pharmaceuticals and biofuel development, to name just a few applications. The technique organizes the bases into various arrangements indicated by specific sequences of A, C, G and T. These bases wrap in a twisted chain around each other—the familiar double helix—to form the molecule. The arrangement of these letters into sequences creates a code that tells an organism how to form.
The complete set of DNA molecules makes up the genome—the blueprint of your body. By synthesizing DNA molecules—making them from scratch—researchers have found they can specify, or write, long strings of the letters A, C, G and T and then read those sequences back. The process is analogous to how a computer stores binary information. From there, it was a short conceptual step to encoding a binary computer file into a molecule.
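That conceptual step is easy to see in code. Below is a deliberately naive Python sketch (not ADS Codex's actual scheme) that maps each pair of bits to one of the four bases and back again.

```python
# Naive 2-bits-per-base mapping, for illustration only: real DNA codecs such
# as ADS Codex use far more sophisticated encodings with error handling.
BIT_PAIR_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BIT_PAIR = {base: bits for bits, base in BIT_PAIR_TO_BASE.items()}

def encode(data: bytes) -> str:
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BIT_PAIR_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    bits = "".join(BASE_TO_BIT_PAIR[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strand = encode(b"DNA!")
print(strand)                      # prints CACACATGCAACAGAC
assert decode(strand) == b"DNA!"
```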
The method has been proven to work, but reading and writing the DNA-encoded files currently takes a long time. Appending a single base to DNA takes about one second. Writing an archive file at this rate could take decades, but research is developing faster methods, including massively parallel operations that write to many molecules at once.
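A quick Python calculation under the one-base-per-second figure above (plus the hypothetical assumptions of an idealized 2 bits per base and a 100 MB archive file) shows why massive parallelism is the only way to hit targets like the MIST goal of one terabyte per day mentioned earlier.

```python
# Rough timings at one base per second, assuming an idealized 2 bits per base.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def bases_for(num_bytes: float) -> float:
    return num_bytes * 8 / 2                     # 2 bits per base

archive_bytes = 100e6                            # hypothetical 100 MB archive file
years_sequential = bases_for(archive_bytes) / SECONDS_PER_YEAR
print(f"100 MB archive, one base/s     : {years_sequential:.1f} years")  # ~12.7 years

# Parallelism needed to write 1 TB within 24 hours (the MIST goal above):
channels = bases_for(1e12) / (24 * 3600)
print(f"parallel channels for 1 TB/day : {channels:,.0f}")               # ~46 million
```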
NOTHING LOST IN TRANSLATION
ADS Codex tells exactly how to translate the zeros and ones into sequences of the four letters A, C, G and T. The Codex also handles the decoding back into binary. DNA can be synthesized by several methods, and ADS Codex can accommodate them all.
Unfortunately, compared to traditional digital systems, the error rates while writing to molecular storage with DNA synthesis are very high. These errors arise from a different source than they do in the digital world, making them trickier to correct. On a digital hard disk, binary errors occur when a zero flips to a one, or vice versa. With DNA, the problems come from insertion and deletion errors. For instance, you might be writing A-C-G-T, but sometimes you try to write A, and nothing appears, so the sequence of letters shifts to the left, or it types AAA.
Normal error correction codes don’t work well with that kind of problem, so ADS Codex adds error detection codes that validate the data. When the software converts the data back to binary, it tests to see that the codes match. If they don’t, it removes or adds bases—letters—until the verification succeeds.
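Here is a toy Python illustration (my own sketch, not ADS Codex's algorithm) of why insertions and deletions are so damaging and how a verification code can at least detect them: deleting a single base shifts every later bit, so a checksum over the decoded payload fails until the frame is repaired.

```python
import zlib

BASE_TO_BITS = {"A": "00", "C": "01", "G": "10", "T": "11"}
BITS_TO_BASE = {bits: base for base, bits in BASE_TO_BITS.items()}

def encode(data: bytes) -> str:
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    bits = "".join(BASE_TO_BITS[b] for b in strand)
    bits = bits[: len(bits) - len(bits) % 8]           # drop any ragged tail bits
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

def verify(strand: str) -> bool:
    data = decode(strand)
    body, crc = data[:-4], data[-4:]
    return zlib.crc32(body).to_bytes(4, "big") == crc

# Store a payload followed by its CRC32 checksum.
payload = b"hello world"
strand = encode(payload + zlib.crc32(payload).to_bytes(4, "big"))

print(verify(strand))               # True: a clean read passes the check
damaged = strand[:7] + strand[8:]   # one deleted base shifts every later bit
print(verify(damaged))              # False: the checksum flags the indel
```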
SMART SCALE-UP
We have completed version 1.0 of ADS Codex, and late this year we plan to use it to evaluate the storage and retrieval systems developed by the other MIST teams. The work fits well with Los Alamos’ history of pioneering new developments in computing as part of our national security mission. Since the 1940s, as an outcome of those computing advancements, we have amassed some of the oldest and largest stores of digital-only data. It still has tremendous value. Because we keep data forever, we’ve been at the tip of the spear for a long time when it comes to finding a cold-storage solution, but we’re not alone.
All the world’s data—all your digital photos and tweets; all the records of the global financial sector; all those satellite images of cropland, troop movements and glacial melting; all the simulations underlying so much of modern science; and so much more—have to go somewhere. The “cloud” isn’t a cloud at all. It is digital data centers in huge warehouses consuming vast amounts of electricity to store (and keep cool) trillions of millions of bytes. Costing billions of dollars to build, power and run, these data centers may struggle to remain viable as the need for data storage continues to grow exponentially.
DNA shows great promise for sating the world’s voracious appetite for data storage. The technology requires new tools and new ways of applying familiar ones. But don’t be surprised if one day the world’s most valuable archives find a new home in a poppy-seed-sized collection of molecules.
Funding for ADS Codex was provided by the Intelligence Advanced Research Projects Activity (IARPA), a research agency within the Office of the Director of National Intelligence.

If you have any doubts, please let me know.