Ugo Rovigatti
Department of Experimental and Clinical Medicine, University of Florence, Florence, Italy.
Corresponding Author: Ugo Rovigatti, Department of Experimental and Clinical Medicine, University of Florence, Florence, Italy, Tel: +0039-389 5608777; E-Mail: [email protected]
Received Date: 12 Jan 2016 Accepted Date: 19 Jan 2016 Published Date: 23 Jan 2016
Copyright © 2016 Ugo R
Citation: Ugo R. (2016). The Long Journey of Cancer Modeling: Ubi Sumus? Quo Vadimus? Mathews J Cancer Sci. 1(1): 001.
KEYWORDS
NGS Technologies; Cancer Modeling; Cancer Genetics (CAN-GEN); Cancer Upstream (UP-CAN); Genome-Snipers.
INTRODUCTION
In a recent review article, an analysis was begun of the current status of research in the so-called Next Generation Sequencing Era (NGSE) [1]. There are several reasons for this effort, and three will be summarized in this Mini-Review: 1. The technological advances leading to ever faster and less expensive sequencing methods for cancer patients [2-4]; 2. The logical conclusion that we are reaching an "end-of-the-road" situation in our understanding of the molecular basis of cancer [5, 6]; and 3. The need to rationalize an enormously growing field and to identify some priorities for future successful interventions [1, 7-9].
1. Technological Advances
These are witnessed even in the short span of a few months [1]. Although the leading and most utilized Illumina and Ion Torrent technologies maintain their prime role on the scene, great progress is also being made by the emerging nanopore (NP) technology [10-14]. As previously mentioned, Oxford Nanopore Technologies (ONT) has recently launched the MinION system for rapid, easy and long-range sequencing [15]. Regarding sequence quality, a number of members of the MinION Access Programme (MAP) started evaluating the instrumentation outputs in May 2014 [1, 14]. ONT still requires several optimizations, as previously indicated [1, 16]. One of the major problems of NP technology, still unresolved, is its intrinsically high error rate. This is generally estimated to be on the order of 30%, although different assessments have ranged between 5% and 40% [14, 17]. Although this problem is being addressed and its causes are beginning to be understood (probably some unspecific binding, oscillation of the nucleic acid at the pore entrance and during binding to the alpha-haemolysin protein, as well as ambiguity in the nano-amperage reading pattern [1]), the high error rate is hampering direct utilization of NP technology for genome sequencing. While alternative solutions are being considered, such as the MspA protein, which may be more efficient and specific [18-20], most of today's methods rely on: A. correcting algorithms and software; and B. parallel reading with state-of-the-art technology (Illumina or Ion Torrent).
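To make these error-rate figures concrete, here is a minimal sketch, in Python with the pysam library, of how a per-read error rate can be estimated once NP reads have been aligned to a trusted reference; the BAM file name is hypothetical, and the alignment edit distance (the standard NM tag) is only a rough proxy for sequencing error:

```python
import pysam  # assumes a BAM of NP reads aligned to a reference genome

def per_read_error_rates(bam_path):
    """Approximate each read's error rate as the alignment edit distance
    (NM tag: mismatches plus indels) divided by the aligned read length."""
    rates = []
    with pysam.AlignmentFile(bam_path, "rb") as bam:
        for read in bam:
            if read.is_unmapped or read.is_secondary or read.is_supplementary:
                continue
            if not read.has_tag("NM") or read.query_alignment_length == 0:
                continue
            rates.append(read.get_tag("NM") / read.query_alignment_length)
    return rates

rates = per_read_error_rates("nanopore_vs_reference.bam")  # hypothetical file
if rates:
    print("mean per-read error rate: %.1f%%" % (100 * sum(rates) / len(rates)))
```

Note that such a proxy also counts true biological variants and alignment artifacts as "errors", which is one reason why published assessments of NP accuracy vary so widely.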
Regarding the first strategy (A), bioinformatic tools convert the amperage changes into nucleotide sequences (base-calling): standard ONT software allows double (2-directional, "2D") readings, stored in FAST5 format [21], which are then extracted into FASTA by programs such as Poretools or poRe [22, 23]. The problem of alignment has typically been addressed by programs such as LAST, BLASR, BWA-MEM and marginAlign [24-27]. Monitoring the readouts and alignments is essential with such a high error rate, and tools are becoming available: minoTour and, most recently, NanoOK, which allows alignment-based quality control and estimation of the error rate, as well as the Nanocorr algorithm, which specifically corrects the NP readouts [14, 28, 29]. Parallel reading (strategy B above), however, still seems to be a necessary approach in order to obtain meaningful sequences; it has become standard practice both for error correction and for assembling genomic sequences. Recently, the group of McCombie at CSHL tested the MinION ONT platform for sequencing and assembling the Saccharomyces cerevisiae genome, with parallel sequencing performed on the MiSeq (Illumina) [14]. Only by performing correction with the previously mentioned Nanocorr software were they able to obtain, through comparison with the shorter MiSeq sequences, a complete and accurate assembly of the yeast genome. A similar analysis was also performed with sequence data from E. coli [14, 21]. Generally speaking, NP technology reads are much longer than those of Illumina/Ion Torrent, and so are the resulting contigs (678 kb versus 59.9 kb, i.e., an approximately 10-fold improvement). This is certainly one of its most important qualities (once the error issue is solved), essential for efficient sequencing of novel genomes [14]. This very short snapshot of NP technology at the start of 2016 can only give an idea of the fast pace of this evolving field. It is however foreseeable that we will have much more efficient and less expensive technologies (already approaching the $1000 human genome goal of G. Church) in the years or months to come [1, 30]. The next and real questions are: how far do we have to keep improving sequencing in order to understand the cancer cell? Are we moving in the right direction; or better, does today's cancer genomics have only one possible explanation?
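Returning to the contig comparison above, assembly contiguity is usually summarized by the N50 statistic: the largest contig length L such that contigs of length L or greater cover at least half of the total assembly. A minimal, self-contained sketch follows, with purely illustrative contig lengths (not the actual yeast assemblies discussed above):

```python
def n50(contig_lengths):
    """N50: the largest length L such that contigs of length >= L
    together cover at least half of the total assembly size."""
    total = sum(contig_lengths)
    running = 0
    for length in sorted(contig_lengths, reverse=True):
        running += length
        if 2 * running >= total:
            return length
    return 0

# Illustrative contig sets only, echoing the ~10-fold difference above:
nanopore_contigs = [678_000, 550_000, 420_000, 300_000]
short_read_contigs = [59_900, 55_000, 40_000, 30_000, 25_000] * 40

print("long-read (NP) N50:", n50(nanopore_contigs))
print("short-read N50:    ", n50(short_read_contigs))
```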
2. "End-Of-The-Road" (EOR)
As for the second question (how far can we reasonably keep searching before reaching the so-called "end-of-the-road", or EOR), some reasonable considerations can be made even without crystal balls [1]. Searching for the "cancer genome" is reminiscent of what happened in the 1950s and 1960s, when molecular biologists were searching for "the gene". Then, great pioneers such as Jonathan Beckwith, James Shapiro, Seymour Benzer and many others capitalized on the previous work of Morgan, McClintock, Beadle, Tatum, Lederberg, Watson, Crick, Jacob, Monod and others to finally identify the entity molecular biologists considered their Holy Grail: "the gene" [31]. However, it became clear already from the work of S. Benzer that the end of the road was going to be reached soon [32]. Benzer unequivocally demonstrated in his study of the rII region of phage T4, already at the end of the 1950s, that the gene had a defined structure, clearly identifiable by thousands of recombination events [32, 33]. The first gene, the lac operon, finally isolated and visualized for the first time by the Harvard team of Beckwith and Shapiro [34], had already been clearly delineated in the experiments of Benzer over 10 years earlier [32]. Recombination (and later complementation) had delineated an inescapable path toward the definition of gene structure. To put it differently, genetic analysis could not proceed any further, or to a finer level, than what Benzer had done [32-34]. Similarly, NGS analysis is today bringing us to another end of the road (EOR). Will becoming capable of analyzing the entire genome of theoretically any cancer cell lead us to a full understanding of cancer cells? Genetically, certainly yes: there is no additional or more sophisticated analysis that we can do. Yet, the answer(s) for cancer understanding may be different from what is expected [1, 5]. For some years now, the paradigm "cancer is genetic" has dominated the research field. Unquestionably, the seminal paper by Hanahan and Weinberg on the Hallmarks of Cancer (HoC) at the end of the last century (and millennium) paved the way for a robust compendium of cancer hallmarks with a genetic basis (as reiterated by the same authors in 2011 and by the voluminous treatise by Weinberg in 2014) [35-37]. The historical and logical needs for such a synthesis under a genetic umbrella are also unquestionable and will probably become the object of future epistemological studies. But, with the clock ticking toward the EOR's inevitable discoveries, the distinguos have started appearing and are growing. Cancer is maybe not genetic, or not just genetic. The first objections came from the fields of epigenetics (S. Baylin, P. Jones) and cytogenetics (P. Duesberg, H. Heng) [38-40]. Obviously, cancer cells often also display epigenetic and chromosomal hallmarks. Although the 2011 and 2014 versions of the HoC include clear examples of chromosomal or epigenetic derangements in cancer cells, the proposed picture privileges genetic alterations, which eventually impinge on the machinery regulating epigenesis and epigenetic marks, chromosomal segregation and structures, etc. [36, 37].
3. Rationalize And Identify Some Priorities For Future Successful Interventions
Are we, therefore, asking the right question(s)? In recent months, a paper in Science by Tomasetti and Vogelstein has stressed this enigma to the limit by showing a random component in cancer hazard (incidence) [41]. Needless to say, this has stimulated strong opposition from the cancer research areas working on environmental carcinogenesis, an important field started by K. Yamagiwa almost 100 years ago [42]. The Science paper has quite often been misunderstood by the mass media, TV etc., as pointed out in the clear analysis of L. Luzzatto in the NEJM a few months ago, to which I refer for further clarifications [43]. Still, the emerging question is that of causality (or the lack of it, as per Tomasetti and Vogelstein). Specific causality is clearly denied only if we claim to know with certainty what cancer is, i.e., what I have called the engine of cancer (TEOC). If we are totally sure that the TEOC consists of somatic mutations accrued during a lifetime (much more rarely by inheritance), then cancer can have a random component, as Tomasetti and Vogelstein have clearly shown [41] (see the sketch below). The real question becomes the nature of TEOCs. To rephrase a well-known quote: "DOES THE DEVIL PLAY DICE?" I have already indicated the friends and foes of such a theory, but logic tells us that we should probably look more carefully, as for the HoC paradigm, at TEOCs: their origins and their evolutionary mechanisms. As previously indicated [1], a simple reading of today's literature suggests that more mechanisms than just somatic mutations are proposed, suggested or believed to be at the origin of TEOCs: at least 9 additional ones are summarized and discussed there [1]. Another consideration (only marginally discussed in [1], and which I am expanding elsewhere) is that, according to the HoC and consequently in the great majority of Targeted Gene Therapy (TGT) approaches, the postulated underlying mechanism is one of "oncogene addiction" [44-46]. However, oncogene addiction has never been clearly defined, particularly in its ontogeny, and the failure of most TGT may also be linked to the ambiguity of such a concept (or misconception) [1]. Modelling in cancer research, and in biology in general, appears to be much more slow-moving than in other scientific arenas: think of nuclear physics or astrophysics for comparison [47]. This phenomenon was also discussed by Leslie Orgel in Nature [48]. In today's cancer-genetic paradigm, the model maintained for over 15 years is strongly asymmetrical. As suggested by Vogelstein and Fearon 19 years ago, the yin-yang forces of Oncogenes and TSGs should be complemented upstream only by so-called Caretakers, with a mechanism resembling that of TSGs but earlier in ontogeny (see Figure 1A) [49].
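As a back-of-the-envelope illustration of this "random component" (a toy multi-hit model with purely illustrative parameters, not Tomasetti and Vogelstein's actual statistical analysis), the sketch below shows how lifetime risk rises steeply with the number of stem-cell divisions even when tissues differ in nothing else:

```python
import math

def binom_tail(n, p, k):
    """P[X >= k] for X ~ Binomial(n, p), via the complement of the lower tail."""
    return 1.0 - sum(math.comb(n, i) * (p ** i) * ((1.0 - p) ** (n - i))
                     for i in range(k))

def lifetime_risk(divisions, stem_cells=1e8, mut_rate=1e-6, hits_needed=3):
    """Toy multi-hit model: each stem-cell lineage undergoes `divisions`
    divisions, acquiring a driver hit with probability `mut_rate` per
    division; transformation requires `hits_needed` hits in one lineage."""
    p_lineage = binom_tail(divisions, mut_rate, hits_needed)
    # Risk that at least one independent lineage transforms, computed
    # stably via log1p/expm1 because p_lineage is tiny:
    return -math.expm1(stem_cells * math.log1p(-p_lineage))

# Tissues identical except for their lifetime number of stem-cell divisions:
for d in (100, 300, 1000, 3000):
    print("divisions = %4d  ->  lifetime risk = %.2e" % (d, lifetime_risk(d)))
```

With three required hits, risk scales roughly with the cube of the division count: this is the intuition behind a purely replicative, "bad luck" component of cancer incidence.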
Figure 1A: Mechanism of Tumor Suppressor Genes and Oncogenes.
This mechanism is totally asymmetrical in proposing that, upstream in cancer ontogeny, only positive, beneficial (i.e. repair) genes or Caretakers can be operating: this great logical gap is represented in Figure 1A as a black box at the upper right. However, as I have recently proposed, logic and experimental evidence suggest that negative, pathological, gene-deleterious mechanisms may also be operating early in ontogeny, upstream not only of Oncogenes but also of TSGs [5]. These genetic factors were called Genome-Snipers, to indicate the active nature of such mechanisms (see the scheme in Figure 1B) [5].
Figure 1B: Dominant mechanism of Genome-Snipers on Tumor Suppressor Genes.
Older work and new evidence have pinpointed the potential relevance of such Genome-Sniper mechanisms, the main ones being:
CONCLUSION
These concluding remarks suggest that, in an era in which NGS applications to cancer cells will become pervasive, it will be essential to focus also on data interpretation and not just on data accrual. Even the Hubble telescope would limit its analysis to a scanty number of galaxies if fixed on a single angle of the universe. Today's analysis cannot be restricted to the idea that somatic mutations "must" be the only causes of human cancer; it has to become more comprehensive and should finally provide explanatory mechanisms for the plethora of additional phenomenology and models emerging in cancer research [1, 7, 59-63].
REFERENCES