Regulatory changes raise troubling questions for genomic testing:
By 6 October 2014, many laboratories in the United States must begin honoring new individual data access rights created by recent changes to federal privacy and laboratory regulations. These access rights are more expansive than has been widely understood and pose complex challenges for genomic testing laboratories. This article analyzes regulatory texts and guidances to explore which laboratories are affected. It offers the first published analysis of which parts of the vast trove of data generated during next-generation sequencing will be accessible to patients and research subjects. Persons tested at affected laboratories seemingly will have access, upon request, to uninterpreted gene variant information contained in their stored variant call format, binary alignment/map, and FASTQ files. A defect in the regulations will subject some non-CLIA-regulated research laboratories to these new access requirements unless the Department of Health and Human Services takes swift action to avert this apparently unintended consequence. More broadly, all affected laboratories face a long list of daunting operational, business, compliance, and bioethical issues as they adapt to this change and to the Food and Drug Administration’s recently announced plan to publish draft guidance outlining a new oversight framework for lab-developed tests.
Well, I don’t know. Is it a “defect”? Might be pretty convenient.
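For anyone who hasn’t poked at these file formats: the “uninterpreted gene variant information” at issue is literally rows of variant calls with no clinical annotation attached. A minimal sketch of what a patient would actually receive (the record, coordinates, and toy parser below are made up for illustration, not taken from the article):

```python
# A hypothetical VCF fragment and a tiny parser, just to show what
# "uninterpreted gene variant information" looks like: variant calls,
# with no clinical interpretation attached. The record is fabricated.
SAMPLE_VCF = """\
##fileformat=VCFv4.2
#CHROM\tPOS\tID\tREF\tALT\tQUAL\tFILTER\tINFO
chr7\t123456\t.\tG\tA\t50\tPASS\tDP=42
"""

def variant_records(vcf_text):
    """Yield the core columns of each data line, skipping '#' header lines."""
    for line in vcf_text.splitlines():
        if not line or line.startswith("#"):
            continue
        chrom, pos, _vid, ref, alt, *_rest = line.split("\t")
        yield {"chrom": chrom, "pos": int(pos), "ref": ref, "alt": alt}

for rec in variant_records(SAMPLE_VCF):
    print(rec)  # {'chrom': 'chr7', 'pos': 123456, 'ref': 'G', 'alt': 'A'}
```

The BAM and FASTQ files mentioned sit even lower in the stack (aligned reads, and raw reads plus quality scores), which is part of why they get so large.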

Nature is written by and for researchers, so they want researchers to have special rights. But there are good reasons too: non-CLIA research labs handle low test volumes, so the amortized cost of implementing this is much higher.
It looks like they are afraid that research facilities will be swamped by requests from test subjects, without any $$ cushion in their grants to accommodate the extra work?
Another interesting problem they cite is the FDA’s soon-to-be-introduced LDT regulation, which would effectively ban the labs from even attempting to interpret most of their incidental findings, as you would think the labs may be ethically compelled to do if they have to release the incidentalome data.
Seems like it’s a matter of providing raw data files, and not the associated interpretation of them. Couldn’t they just charge extra for interpretation? Doesn’t seem like it would be that much more work for those who presently hold the data if it’s merely sending (or allowing access to) data files.
I’m not seeing how it’s a “defect” either.
Yeah, the concerns in that article sound way overblown to me, but then I’m firmly on the “I want to know everything” side of the genetic testing ethics debate. My only real concern isn’t addressed at all, namely: what does “providing access” entail? These are big honking files; making them available online is not a trivial matter. Even low-coverage whole-genome data is in the multi-gigabyte range. Sending it through snail mail might be simpler, but it could still be costly if an organization is doing really large-scale testing.
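Very rough numbers on how big these files get (the coverage, bytes-per-base, and compression figures below are all assumed round numbers, not measurements from any lab):

```python
# Back-of-the-envelope estimate of raw sequencing file sizes.
# Every constant here is an assumption for illustration only.
GENOME_BP = 3.1e9           # approximate haploid human genome length
BYTES_PER_BASE_FASTQ = 2.0  # ~1 byte base + ~1 byte quality, uncompressed
GZIP_RATIO = 0.25           # assumed gzip compression ratio for FASTQ

def fastq_gz_gigabytes(coverage):
    """Very rough compressed FASTQ size for a whole genome at a given coverage."""
    raw_bytes = GENOME_BP * coverage * BYTES_PER_BASE_FASTQ
    return raw_bytes * GZIP_RATIO / 1e9

for cov in (1, 4, 30):
    print(f"{cov:>2}x coverage: ~{fastq_gz_gigabytes(cov):.0f} GB of FASTQ.gz")
# Even 1-4x "low coverage" data lands in the multi-gigabyte range,
# and a 30x genome runs to tens of gigabytes per person.
```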
And as more and more research universities are generating terabytes of raw, processed (and much junk) genomics data weekly, the institutions are all being forced to figure out what to do with that data: who will host, curate, and selectively and securely distribute it. It’s becoming another bureaucratic nightmare: multi-departmental committees will burn a bunch of wo/manhours 🙂
And when I say “junk” data, I refer in general to the many poorly processed, improperly handled, and incorrectly *labelled* results that come from human error in labs without strict practices.
What I’ve seen of my own genomic data depresses me, anyway.