UA engineer designs better error-correction code

(PhysOrg.com) -- One company has already licensed the technology from the UA, and patents are pending to meet growing computer industry demand for the error-correction algorithm developed by Bane Vasić.

The inner workings of computer chips and hard drives are a mystery to most people – which is ironic, considering they work much like human brains.

Error-correcting codes have played a vital role during the last 50 years by ensuring that digital data keeps its integrity within computer communication and storage systems.

The codes programmed into these chips act just like our brains when we try to make sense of something unfamiliar. Like human brains, these chips search for true meaning by constantly looking for errors and correcting them.

"Error correction is present everywhere, in natural as well as in man-made systems," said professor Bane Vasić of the University of Arizona's department of electrical and computer engineering. He uses his Serbian accent, and the way some people occasionally misunderstand what he is saying, as an example.

"As I am talking, your brain corrects the errors caused by my accent," he said. "Our brains correct all kind of errors in speech by analyzing the context and meaning of sentences."

"The error correction-codes that we as engineers build in communications or memory chips are a kind of grammar that computers use to understand data and keep it meaningful."

Now, more than 50 years after the discovery of error-correction codes, Vasić has discovered a way to design error-correction decoders with superior performance.

At the heart of modern coding theory is a piece of artificial intelligence used to interpret certain error-correction codes. This key is called the "belief propagation algorithm."

This algorithm is actually relatively simple. Data bits are continuously being deconstructed and reconstructed, and the algorithm does this at high speed and doesn't make too many errors. This means, for example, that data is saved or transmitted quickly and that virtually no errors are introduced into the data during deconstruction and reconstruction.
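For readers curious what this looks like in practice, here is a minimal, illustrative sketch of belief propagation (the classic sum-product rule) on a small parity-check code. The (7,4) Hamming-style parity-check matrix, the function name and the numbers below are assumptions chosen for illustration; this is not the production decoder described in the article.

```python
import numpy as np

# One common parity-check matrix for a (7,4) Hamming code (illustrative choice).
H = np.array([
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
])

def belief_propagation(llr, H, iters=20):
    """Sum-product decoding of one noisy word.

    llr : per-bit log-likelihood ratios from the channel (positive favors 0).
    H   : parity-check matrix; rows are checks, columns are bits.
    Returns a hard-decision estimate of the codeword.
    """
    # Bit-to-check messages live on the edges of H, seeded with channel beliefs.
    msg = np.where(H == 1, llr, 0.0)
    for _ in range(iters):
        # Check-to-bit step: each check combines the *other* bits' beliefs
        # using the tanh rule of the sum-product algorithm.
        t = np.tanh(np.clip(msg, -20, 20) / 2.0)
        t[H == 0] = 1.0
        t = np.where(t == 0.0, 1e-9, t)                  # avoid divide-by-zero
        ext = np.prod(t, axis=1, keepdims=True) / t      # leave-one-out product
        c2b = 2.0 * np.arctanh(np.clip(ext, -0.999999, 0.999999)) * H
        # Bit-to-check step: channel belief plus advice from the other checks.
        total = llr + c2b.sum(axis=0)
        msg = (total - c2b) * H
        hard = (total < 0).astype(int)
        if not (H @ hard % 2).any():                     # all checks satisfied
            return hard
    return hard

# Seven bits arrive; bit 4 (zero-indexed) wrongly looks like a one.
llr = np.array([4.0, 4.0, 4.0, 4.0, -1.5, 4.0, 4.0])
print(belief_propagation(llr, H))   # -> [0 0 0 0 0 0 0]
```

In this toy run the checks touching the suspect bit outvote the channel, and the decoder settles on the clean all-zero word after a single pass.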

But the belief propagation algorithm does have its limitations. When the algorithm acts on shorter error-correction codes, performance can abruptly drop through the floor. In fact, this loss of performance is known as the "error floor phenomenon." Vasić describes it as "arguably one of the most important problems in coding theory."

Vasić has discovered how to do error correction that is both simpler and better than belief propagation. "It is like solving Sudoku," Vasić said of his new algorithm.

Imagine that the cells in a Sudoku puzzle represent the transmitted bits that need to be reconstructed. The contents of some bits, or cells, are known, but some are blank. Worse, some are wrong.

"A neat property of this new is that there is no need for a brain or some central intelligence to solve the puzzle, because the cells solve the puzzle collectively," Vasić said. "No individual cell has a global knowledge about the solution, but collectively the cells find the solution by passing messages among one another."

This message passing is like small-town gossip, said Vasić. "Wrong cells are not good neighbors, and we spread virtual gossip to the neighbors of bad neighbors, so that cells learn who is good and bad in the neighborhood."

"Eventually, the bad neighbors see the error of their ways and become good. Everything is in harmony, and it has been achieved simply, without central command."

Vasić said the original belief was that the algorithms used to reconstruct data, or to solve the Sudoku, would have to come up with a solution that was very close to the finished puzzle in order to work. This, he said, would have made them too complex.

"We have demonstrated that this original belief is not the case," Vasić said. "In some cases our simple decoder outperforms belief propagation to the extent that errors are reduced by 90 percent."

Vasić described the development of these new algorithms as a hard problem that required years of research using new theoretical tools and knowledge from disciplines such as combinatorics, artificial intelligence, machine learning and statistical mechanics.

"I am lucky to have extremely talented students and collaborators," Vasić said. "This breakthrough is a result of joint work with my students, Shashi Kiran Chilappagari and Shiva Planjery, and my collaborator from France, professor David Declercq."

Vasić said his discovery "opens up a plethora of beautiful theoretical problems." The National Science Foundation agrees, and is funding his research to the tune of $675,000.

Vasić's lab is in the process of patenting these new decoders, which he plans to implement in silicon chips that will be offered to flash memory, optical communications and hard drive companies. "We already have at least three companies interested in this technology," Vasić said. "One has already licensed the technology from the University of Arizona."

"The application in fault-tolerant systems is also very intriguing," Vasić said. "We believe we can prove that these decoders will work well in high radiation environments such as space exploration and satellite systems, as well as in systems built of unreliable components, such as nano-scale systems."

