

Speeding up gene sequencing using Big Data and HPC

21 July 2014 by Pierre Picard

All the genetic material in an individual’s genome is often compared to an encyclopedia, with chromosomes representing the different volumes and genes the sentences within those volumes. Genome research therefore manages massive amounts of data, which requires huge computer processing and storage capacity. The answer is biomedical supercomputing. By eliminating the bottlenecks that often occur in data analysis and by storing data more efficiently, it is possible to carry out sequencing on a mammoth scale.

“The challenge in genomics is such,” commented Ivo G. Gut, Director of CNAG, “that it cannot be met with traditional computing. The solution is not to increase the number of sequencers. The key lies in the balance between sequencing and HPC. It is not only about increasing sequencing capacity by acquiring new hardware, but about designing an appropriate computing infrastructure – from A to Z – with the help of a technology partner that has extensive experience in the field of genomics. It is also essential to choose a flexible infrastructure that can grow without limits, to keep pace with genome projects.”

A key challenge: The full implementation of genomics in public health systems
“The full implementation of genomics in public health systems involves resolving certain specific scientific and technological challenges: the latter are the ones that Bull is determined to meet using its supercomputing capabilities,” said Natalia Jiménez, Senior Health and Life Sciences Adviser at Bull.

More information: read the CNAG&Bull press release
