Evolving Medical Education for a Digital Future


Three major technology trends—mobile phone–enabled platforms, big data, and artificial intelligence (AI)—exemplify how new technologies are transforming conventional modes of healthcare delivery. Mobile applications are replacing activities previously requiring in-person visits, computers are using vast new data streams to personalize treatment approaches, and AI is augmenting disease diagnosis.

Physicians have an important role in deciding where and how these new tools might be best utilized in diagnosing, treating, and managing health conditions. As medicine undergoes a “digital transformation,” a foundational review of medical education spanning medical school, residency, and continuing medical education (CME) is needed to ensure that physicians at all stages of practice are equipped to integrate emerging technologies into their daily practice. By evolving medical education today, we can prepare physicians for medicine’s digital future.

Computers algorithmically diagnosing diabetes from retinal scans[1]; chatbots providing automated mental health counseling[2]; smartphone applications using activity, location, and social data to help patients achieve lifestyle changes[3]; mobile applications delivering surgical follow-up care[4]; and smartwatches passively detecting atrial fibrillation[5] are just a few examples in which technology is being used to augment conventional modes of healthcare delivery.

Many proposals to evolve medical training in a world of continuous technology transformation have focused on specific technologies, such as incorporating telemedicine into existing Accreditation Council for Graduate Medical Education (ACGME) competencies,[6] creating a new specialty of “medical virtualists,”[7] or better integrating data science into healthcare.[8]

Emerging Technologies Transforming Medicine

Looking beyond legacy health information technology platforms like electronic health records (EHRs), active venture capital funding provides a vision for where the community is placing its bets for emerging technologies. We highlight three areas drawing significant investor interest: mobile health ($1.3 billion raised in 2016),[9] big data enabling precision medicine ($679 million),[10] and AI ($794 million).[11]

Mobile health. In a 2015 national survey, 58.2% of smartphone owners reported having downloaded a health-related mobile application[12] from an estimated 259,000 available health-related applications.[13] These applications frequently help patients self-manage their health conditions by providing education, tracking tools, and community support between clinic visits.

Big data enabling precision medicine. Phone-based sensors, wearable devices, social media, EHRs, and genomics are just a few of the many new technologies collecting and transmitting clinical, environmental, and behavioral information. These new contextual data streams are facilitating personalized medical decision-making with treatments tailored to each individual patient.

AI. New computational methods such as AI, machine learning, and neural networks are augmenting clinical decisions via algorithmic interpretation of huge data sets that exceed human cognitive capacities. These new computational technologies hold great potential to assist with diagnosis (interpretation of ECGs, radiology, pathology), personalized treatment (tailoring treatment regimens for individual tumor genotypes), and population health (risk prediction and stratification), though for now they remain software innovations reliant on human clinician hardware to guide appropriate use.[14]

Knowledge Domains

Physicians will help decide where and how these new tools are best utilized. A recent study by the American Medical Association (AMA) found significant physician interest in digital health tools: 85% of physicians reported perceiving at least some benefit from new digital tools in improving their ability to care for patients.[15]

Integrating emerging technologies such as mobile applications, big data, and AI into regular practice will require providers to acquire new knowledge across ACGME educational domains such as Professionalism, Interpersonal & Communication Skills, and Systems-Based Practice.

From a foundational perspective, it is important that physicians understand their role and potential liability as related to these new technologies. This includes but is not limited to:

  • Understanding relevant laws, particularly state-based regulations concerning remote practice of medicine (ie, telemedicine). (Systems-Based Practice)
  • Compliance with HIPAA and other key privacy regulations when interacting with patient-generated data outside the bounds of the EHR. (Systems-Based Practice)
  • Evaluating potential malpractice implications, including assessing coverage scope. (Systems-Based Practice)
  • Awareness of emerging reimbursement codes for time allocated to new technology–enabled practice models. (Systems-Based Practice)

Outstanding questions remain regarding the clinical efficacy of many new technologies. With formal clinical trials still underway, physicians may feel unable to speak definitively regarding a specific technology’s potential risks and benefits. Yet, the increasingly broad use of these tools requires that physicians use their clinical expertise to help their patients understand the limitations of such technologies and steer them toward appropriate tools. Essential skills and roles that modern physicians must now adopt include:

  • Teaching patients how to identify trusted tools—those using evidence-based guidelines or created in conjunction with credible physicians, scientists, and hospitals (Medical Knowledge)
  • Setting clear expectations upfront about the extent of physician involvement in reviewing patient-generated data (particularly if there is no anticipated involvement) (Patient Care)
  • Assessing technology literacy in the social history and adapting patient education on the basis of digital attainment, including recommending websites, online video, and mobile apps when appropriate (Patient Care)
  • Advancing clinical knowledge by referring select patients to enroll in digital remote clinical trials (Systems-Based Practice)

Given the rapidly increasing number of data inputs to clinical decisions, physicians must augment their statistical knowledge and become generally familiar with new data science methods:

  • Leverage data science tools such as visualization to more efficiently review large amounts of patient data, including identification of outliers and trends (Medical Knowledge)
  • Seek to understand the inputs and assumptions of advanced computational algorithms and not allow them to become a black box. Recognize that although deep-learning algorithms can deduce important patterns and relationships, physicians remain necessary as a critical lens in deciding how to apply findings to each individual patient. (Patient Care)
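As a minimal sketch of what such data-science literacy looks like in practice, the short example below (all readings are synthetic and the thresholds are illustrative) flags outliers and estimates a trend in a stream of patient-generated glucose readings:

```python
import numpy as np

# Hypothetical daily fasting glucose readings (mg/dL) from a patient-generated stream
readings = np.array([98, 102, 95, 101, 99, 104, 97, 100, 165, 103,
                     106, 101, 108, 105, 110, 107, 112, 109, 114, 111])

# Flag outliers: readings more than 2 standard deviations from the mean
mean, sd = readings.mean(), readings.std()
outliers = readings[np.abs(readings - mean) > 2 * sd]

# Estimate the overall trend with a least-squares line (slope in mg/dL per day)
days = np.arange(len(readings))
slope = np.polyfit(days, readings, 1)[0]

print(f"outliers: {outliers.tolist()}, trend: {slope:+.2f} mg/dL/day")
```

Even this crude screen surfaces the single anomalous reading (165 mg/dL) and a gradual upward drift that would be easy to miss when scrolling through raw numbers.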

Implications for Physicians

For current medical students and trainees, many of whom are digital natives themselves, the educational domains outlined above may seem intuitive or obvious. In contrast, physicians in practice today are already burdened with countless administrative tasks that may make these future technologies feel overwhelming or irrelevant. Yet the frustration and burnout that many physicians experience with EHRs illustrate exactly why it is critically important for physicians to engage early in the dissemination of new technologies.

The first step for providers in the digital transformation of medicine is awareness. While providers may be unaware, or even dismissive, of new technologies, these tools are already being used avidly by millions of patients around the world.[11] Mobile health, big data, and AI will soon become an integral part of medicine, much like EHRs (and stethoscopes).

The second step is for physicians to familiarize themselves with general categories of new digital tools. New journals such as the Journal of Medical Internet Research offer peer-reviewed manuscripts focused on “eHealth and healthcare in the Internet age.” Physicians may benefit from downloading and signing up for test accounts of new applications or connected health devices. Organizations should consider allowing physicians to spend a portion of their CME budgets on such “digital transformation” learning activities.

The third step is for physician leadership organizations to work with regulatory agencies like the FDA to help identify the most robust tools for physicians to adopt and recommend. A positive example of this is the AMA’s recently issued guidelines on the appropriate use of digital medical devices.[16]

By evolving medical education today, we can prepare physicians for medicine’s digital future. In the face of complex and rapid change, we may all be trainees in a world of ever-accelerating technological evolution.

How can we use big data to improve patient outcomes?


With the volume of available data skyrocketing across all industries, the need for accurate analysis is greater than ever. But computers and algorithms can only do so much. You need humans for real insight.


Electronic records, health insurance claims, real-time monitoring by connected mobile devices. More medical information is being generated and gathered today than ever before. It’s a trend that’s set to continue.

With aging populations putting increasing pressure on health services, and healthcare costs rising worldwide, working out how to make healthcare more efficient is also an increasingly pressing concern.

The question is how to use this data to drive the most effective healthcare outcomes. We need to move beyond the big data hype to big data results.

There are many regulatory and technological challenges to overcome to maximize the benefit of medical data analytics, from incompatible systems to privacy and intellectual property concerns. And if you come to the wrong conclusions, it truly can be a matter of life or death.

Demonstrating the real-world value of healthcare analytics in improving both efficiency and patient outcomes is vital to winning over governments, healthcare organizations and individual patients.

When analytics is not enough

Although artificial intelligence may be improving, algorithms can only take us so far. Strings of code may be able to help us identify trends in data but they are still not smart enough to derive true insight on their own.

Perhaps the best-known example is Google Flu Trends. When it first appeared, its ability to spot flu outbreaks weeks ahead of traditional methods, by analyzing search engine queries about symptoms, was hailed as a breakthrough.

For the 2013 flu season, the tool’s predictions were wide of the mark. Why? The algorithm didn’t distinguish between someone who had symptoms and someone who was merely asking about symptoms. With such a vast amount of data, the volume of spurious signals became so great as to render the findings almost meaningless.
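A toy calculation (every number here is invented) makes the failure mode concrete: an estimator built on raw query counts inflates the moment curiosity, rather than illness, drives the searches:

```python
# Toy model of the Google Flu Trends failure mode: query volume conflates
# the ill with the merely curious. All numbers are synthetic.
weeks = 10
true_cases = [100] * weeks            # actual weekly flu cases never change
media_buzz = [0] * 5 + [400] * 5      # news coverage spikes curiosity mid-season
QUERY_RATE = 0.8                      # assumed symptom queries per person

# A naive correlation-based estimator counts every symptom query as a case
estimates = [round(QUERY_RATE * (true_cases[w] + media_buzz[w]))
             for w in range(weeks)]

print(estimates)  # jumps from 80 to 400 while true cases stay at 100
```

The estimator quintuples its case count halfway through the season even though the number of sick people never changes, which is the essence of mistaking correlation for the signal you actually want.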

Human-led useful algorithms

The idea behind Google Flu Trends was a good one, but its correlation-based approach led to analytical insights that were too simplistic.

Today, experiences like this are helping to improve medical analytics.

The key is to acknowledge that access to data and clever code is not enough. To develop truly useful algorithms, and derive genuine, actionable insight from their analytics, you need human beings with genuine expertise in the field.

By bringing together people with deep medical expertise, data scientists and coders, today’s medical analytics are being developed not only with details of symptoms, but also with the vital broader context. This includes disease progression, drug interactions and even how diseases may interact with other disorders, diseases and lifestyle factors.

What’s more, human analysts will continue to be essential in sanity-checking machine findings, spotting spurious results before they lead to misinformed decisions. It’s not enough to bring analysts in only at the start – they need to be there at the end, continually monitoring results. This ensures not only that the analytics are providing genuine insight, but also that they can be adapted to evolve in the face of new data and changing conditions.

As a result, both people and technology are becoming increasingly able to extract true insight from the vast array of information at their command. While machines aid in the data-crunching, and can help highlight potential trends, humans are and will continue to prove vital to understanding what the results of these analytics truly mean.

Big Sequencing Beclouds Big Data


  • According to Illumina, next-generation sequencing data volume has doubled every year since 2007, representing over a 1,000-fold increase in the amount of data that needs to be processed. Add in proteomics, metabolomics, medical records, and other information, and it is obvious that Big Data is growing at an explosive rate.

    But without the proper computational and informatics tools, more data won’t necessarily amount to more, or better, information. Ongoing governmental initiatives and commercial advances are shaping the way the scientific community addresses this challenge. Bioinformatics leaders recently convened at CHI’s Bioinformatics for Big Data, Converting Data into Information and Knowledge conference to discuss progress in the field.

    The National Center for Multiscale Modeling of Biological Systems (MMBioS)—a collaborative effort between the University of Pittsburgh, Carnegie Mellon University, the Pittsburgh Supercomputing Center, and the Salk Institute for Biological Studies—was established in 2012, in the first round of Biomedical Technology Research Resources (BTRRs). Today, 35 NIH-funded BTRRs create and apply unique technology and methods in their respective fields while facilitating the research of NIH-funded laboratories.

    The MMBioS Resource focuses on neurobiological as well as immunological applications. To gain deeper mechanistic understandings, MMBioS develops multiscale simulations to bridge molecular events and disease and organ functions. In particular, MMBioS sustains technology development projects in molecular modeling, cell modeling, and image processing.

    These technology efforts are guided by biomedical projects that MMBioS conducts with research groups across the country. These projects focus on glutamate transport, synaptic signaling, dopamine transporter function, T-cell signaling, and neural circuits. Besides these driving biomedical projects, MMBioS engages in a large number of collaborations with experimental and computational research groups. In addition, information and technology is disseminated to the larger scientific community through MMBioS’ website, training workshops, and tutorials.

    “This phenomenal type of joint effort is extremely useful,” said Ivet Bahar, Ph.D., distinguished professor and John K Vries chair, department of computational and systems biology, School of Medicine, University of Pittsburgh. “The problems we are dealing with are much more complicated than an individual laboratory can handle. Our role is to build the technology, which we devise in response to existing research needs and challenges.

    “The computations we develop are very fast, efficient, and inexpensive so in silico experiments can minimize the wet lab benchtop effort. Computations serve two important roles: they help interpret experimental data in the framework of well-defined quantitative models and methods, and they help build new hypotheses, which are then tested experimentally.”

    Pittsburgh also has a new BD2K (Big Data to Knowledge) Center of Excellence. This BD2K project, called the Center for Causal Modeling and Discovery, is a collaboration between the University of Pittsburgh, Carnegie Mellon University, the Pittsburgh Supercomputing Center, and Yale University.

    The project’s diverse participants include Carnegie Mellon’s philosophy department. Causal and logic models already used in a variety of applications will be extended to biomedical Big Data to gain insight into mechanisms of function and to understand relationships important for therapy, especially personalized medicine. A short course teaching causal modeling techniques is currently being organized.

  • Large Systems Perspectives

    Simple solutions do not solve problems in complex systems. At IPQ Analytics, disease-agnostic models take a large-system perspective and span the entire patient experience, from conditions preceding illness onset to symptom display, diagnosis, treatment decision, physician compliance, patient adherence, and outcome.

    Because the basic disease process remains the same, many elements are present in all diseases. Specific risk factors, however, may be weighted differently, and new elements may be added to customize and extend the general model. For example, in a rare pediatric disease, the model was extended to look at pregnancy history and in utero exposures, factors that are also relevant in breast cancer.

    “We need to consider the complete system and think about the real-world problem before we can ask the right questions,” insisted Michael Liebman, Ph.D., managing director of IPQ Analytics. “Transitioning data to information to knowledge to clinical utility is difficult. We try to identify how large the gap is, what crucial issues need to be addressed, and what questions need to be answered.

    “If the patient is at the top of a pyramid and you work to fill in only the pieces necessary to answer critical questions, then the pyramid remains stable. If you build from the bottom up, as each new technology develops a block, something will always be missing to complete the base, making the pyramid unstable.”

    Some data may be expensive and hard to collect. Modeling enables evaluation of the impact of missing information, and it allows identification and prioritization of what the model needs to make it more precise in its predictions.

    In the modeling of breast cancer risk, the personalized history of the patient, which may contain information such as changes in weight over the patient’s lifetime and time of menarche, is scrutinized. The modeling recognizes breast cancer fundamentals, such as the concept that the breast undergoes developmental change throughout a woman’s lifetime, and the concept that hormonal changes, which produce long-term effects, are influenced by body fat and other factors.

    These changes need to be appreciated if the modeling is to capture the understanding that risk is not uniform over a woman’s lifetime but varies from stage to stage of personal development. More in-depth analysis of specific regulatory pathways and molecular processes at each stage of development may point to sources of risk and help identify better biomarkers and ways to manage or prevent the disease.

    • Making Data Accessible

      Basic clinical and outcomes research data must be accessible, ideally not just to investigators within an institution, but also across institutions, which may include pharmaceutical companies. Such accessibility is the aim of SPIRIT (Software Platform for Integrated Research Information and Transformation), an integrated research information platform designed to enable the integration of in-house, open source, and commercial off-the-shelf applications for the City of Hope (COH).

      “We wanted to develop the platform not just to integrate the data and serve the operational needs, but also as a springboard to put together proof of concepts and new applications, such as machine learning and biomedical natural language processing pipelines, which allow us to analyze data and provide results much faster,” discussed Ajay Shah, Ph.D., director of research informatics and systems, COH National Medical Center.

      Yet much medical data are free-text notes that are challenging to extract and put into a coded, searchable database. To address free-text notes, COH combined open-source packages into a platform for biomedical natural language processing. To facilitate the standardization of biomedical and clinical data, COH’s SPIRIT platform leverages the Unified Medical Language System (UMLS), a set of files and software that brings together many health and biomedical vocabularies and standards to enable interoperability between computer systems.

      COH also uses i2b2 (Informatics for Integrating Biology and the Bedside), which was funded by the NIH and developed by the nonprofit organization Partners HealthCare to provide a platform for integrating data across biomedical domains. At COH, i2b2 is used primarily for cohort identification and the determination of biospecimen availability for clinical trials, and it is integrated with the clinical trial management system, cancer registry, biospecimen database, and other data sources via the enterprise data warehouse.

      SPIRIT remains a work in progress. Technical challenges include codification of data, keeping the primary system updated, and reconciling differences in ontologies. IP challenges and goal-alignment efforts must also be dealt with, even while a data-sharing culture is inculcated.

      The enormous power of Big Data is exemplified by several initiatives, such as the Shared Health Research Information Network (SHRINE), which enables data integration from various institutions via i2b2, and the Oncology Research Information Exchange Network (ORIEN), an evolving partnership among premier North American cancer institutes to leverage multiple data sources and match patients to targeted treatments.
    • Processing Data Faster

      The Cray Urika-XA analytics server represents the convergence of supercomputers and analytics. Containing over 1,500 cores, the Urika-XA transitions a data center from batch-mode processing to low-latency fast analytics.

      Scientific innovation is progressing at a faster pace than most organizations’ ability to refresh their IT infrastructure; big data requires compute density. For example, usage and technical innovation are driving down sequencing costs, which continues to fuel the informatics demand for higher throughput and accuracy.

      The Cray Urika-XA analytics server contains 48 nodes, over 1,500 cores with 6 TB of RAM, a 38 TB solid-state drive, and a 120 TB POSIX-compliant parallel file system. The small-footprint server is preconfigured and delivered with Hadoop and Spark, is optimized for use at high density, and offers a lower total cost of ownership for a normal data center life cycle of three to five years.

      Because the server has over 1,500 cores, it can run more than 1,500 compute events simultaneously, two to three times the density of other platforms.

      “What makes Urika so cool is the compute density. This convergence of supercomputers and analytics allows scaling from proof of concept to production in the same environment,” stated David Anstey, global head of life sciences at Cray. “You can get more done faster. Think about the possibilities if there were no constraints. What would the impact be if you could ask tougher, more probing questions in an iterative way?”

      A combination of technology and people’s ability to leverage that technology effectively, Anstey insisted, will determine how fast precision medicine evolves.

      The Urika-XA can transition a data center from batch-mode processing to low-latency fast analytics. End users can run their own jobs while the software handles the workflow, simplifying the scientific analysis.

      A large cancer group’s analysis of over 30,000 samples, where the goal was to look at the effect of genetic mutation on gene expression, previously took 6 minutes per sample to complete, almost 3,600 hours for the panel. Rerunning this analysis on Spark using Urika-XA decreased the analysis time to 20 minutes, demonstrating the effectiveness of using in-memory analytics across a significant amount of compute.

      The storage capacity of the Urika-XA platform can allow data to be augmented with additional information, such as metabolic and lifestyle histories, and then reanalyzed without data movement, minimizing expense.
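The free-text challenge that City of Hope addresses with natural language processing can be sketched in miniature. A real pipeline would query UMLS terminology services; the tiny dictionary, the concept identifiers, and the sample note below are purely illustrative:

```python
import re

# Minimal sketch of concept normalization in a clinical NLP pipeline.
# A real system would query UMLS; this dictionary of concept IDs is illustrative only.
concept_map = {
    "heart attack": "C0027051",          # myocardial infarction
    "myocardial infarction": "C0027051",
    "high blood pressure": "C0020538",   # hypertension
    "hypertension": "C0020538",
}

def extract_concepts(note: str) -> set[str]:
    """Return the set of concept IDs whose surface forms appear in a free-text note."""
    text = note.lower()
    return {cui for phrase, cui in concept_map.items()
            if re.search(r"\b" + re.escape(phrase) + r"\b", text)}

note = "Pt with hx of heart attack, now presenting with high blood pressure."
print(extract_concepts(note))  # both lay phrases map to coded concepts
```

Real systems such as cTAKES or MetaMap layer tokenization, negation detection, and word-sense disambiguation on top of this basic normalization step.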

IBM’s Watson Now Tackles Clinical Trials At MD Anderson Cancer Center


IBM continues to expand the use of its Watson supercomputer from winning Jeopardy to handling incoming call-center questions to guiding cancer doctors at Memorial Sloan Kettering to better diagnoses. Today it announced a new pilot program for Watson at Houston’s renowned MD Anderson Cancer Center. The institution has been trying out Watson for a little under a year in its leukemia practice as an expert advisor to the doctors running clinical trials for new drugs.

According to the FDA, some $95 billion is spent on clinical trials each year and only 6% are completed on time. Even at the best cancer centers, doctors running trials are feeling around in the dark. There are so many variables in play for each patient and clinicians traditionally only look at the few dozen they feel are the most important. Watson, fed with terabytes of general knowledge, medical literature and MD Anderson’s own electronic medical records, can riffle through thousands more variables to solve the arduous task of matching patients to the right trial and managing their progress on new cancer drugs.
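At its core, trial matching is constraint checking at scale. The schematic below is not IBM’s implementation (the trial names, eligibility criteria, and patient values are all invented), but it shows the shape of the task Watson automates across thousands of variables instead of a handful:

```python
# Schematic of matching a patient to trials by checking each trial's
# eligibility criteria against the patient record. All data are hypothetical.
patient = {"age": 62, "diagnosis": "AML", "ecog": 1, "prior_lines": 2}

trials = {
    "TRIAL-A": {"diagnosis": "AML", "max_age": 75, "max_ecog": 2, "max_prior_lines": 1},
    "TRIAL-B": {"diagnosis": "AML", "max_age": 70, "max_ecog": 2, "max_prior_lines": 3},
    "TRIAL-C": {"diagnosis": "CLL", "max_age": 80, "max_ecog": 1, "max_prior_lines": 2},
}

def eligible(patient, criteria):
    # Every criterion must hold for the patient to qualify
    return (patient["diagnosis"] == criteria["diagnosis"]
            and patient["age"] <= criteria["max_age"]
            and patient["ecog"] <= criteria["max_ecog"]
            and patient["prior_lines"] <= criteria["max_prior_lines"])

matches = [name for name, criteria in trials.items() if eligible(patient, criteria)]
print(matches)  # ['TRIAL-B']
```

The hard part in practice is not the filtering itself but populating the patient record: extracting those thousands of variables from unstructured notes is where the terabytes of medical literature and EHR data come in.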

“It’s still in testing and not quite ready for the mainstream yet, but it has the infrastructure to potentially revolutionize oncology research,” says MD Anderson cancer doctor Courtney DiNardo. “Just having all of a patient’s data immediately on one screen is a huge time saver. It used to take hours sometimes just to organize it all.”

In a blog post published today, Dr. DiNardo gave an example of filling in for a colleague who was out of town. She had to meet with one of his patients who had a particularly complicated condition that needed a management decision. “Under normal circumstances, it may have taken me all afternoon to prepare for the meeting with enough insight to provide the most appropriate treatment decisions. With Watson, I am able to get a patient’s history, characteristics, and treatment recommendations based on my patient’s unique characteristics in seconds.”

Watson isn’t taking over the work from doctors. It’s more of an advisor, giving evidence-based treatment advice based on standards of care, while providing the scientific rationale for each choice it makes. Doctors can click on any option and drill down to the medical literature or patient data used to generate that option, along with the level of confidence Watson has in that source. “It’s tracking everything and alerts you when something’s wrong, like when you need to start a patient on prophylactic antibiotics if they have severe neutropenia (an abnormally low number of a certain type of white blood cell),” said Dr. DiNardo. “We might know that here at MD Anderson because we see hundreds of leukemia patients, but a less expert center might not. Now everyone in the world becomes a leukemia expert.”

So far only ten oncologists on faculty at MD Anderson have been involved in the pilot and, while Watson is using real patient data, its advice hasn’t been applied to real patients yet. That might come in early 2014. But for DiNardo, just the Big Data opportunity of using Watson to look across all of genomics and molecular research while assimilating every structured and unstructured byte of patient data is tantalizing. “The potential is really amazing,” she says.

What would big data think of Einstein?


A friend of mine recently remarked on the uncanny ability of Netflix to recommend movies that he almost always finds interesting. Amazon, too, barrages email inboxes with book recommendations, among other things. Indeed, the entire advertising industry has been transformed by its ability to use data to target individual consumers in ways unimaginable in the Mad Men era.

The power of big data goes far beyond figuring out what we might want to know. Big data helps pharmaceutical companies identify the attributes of their best sales people, so they can hire, and train, more effectively. Big data can help predict what songs are likely to be hits, which wine vintages will taste better and whether chubby baseball pitchers have the right stuff.

But big data should not be confused with big ideas. It is those ideas — the ones that make us conjure up the image of Albert Einstein — that lead to breakthroughs.

The benefits of big data are so, well, big, that there’s no going back. Yet I don’t need to re-read George Orwell, or scan the latest headlines about the massive snooping of personal communications orchestrated by the National Security Agency in the United States to feel at least some discomfort with big data’s side effects. One that seldom gets notice: in a world where massive datasets can be analysed to identify patterns not easily identified using simpler analogue methods, what happens to genius of the Einstein variety?

Genius is about big ideas, not big data. Analysing the attributes and characteristics of anything is guaranteed to find some patterns. It is an inherently atheoretical exercise, one that requires minimal thought once you’ve figured out what you want to measure. If you’re not sure, just measure everything you can get your hands on. Since the number of observations — the size of the sample — is by definition huge, the laws of statistics kick in quickly to ensure that significant relationships will be identified. And who could argue with the data?
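That statistical trap is easy to demonstrate (the data below are pure noise): screen a thousand random variables against a random outcome and dozens of “significant” correlations appear by chance alone:

```python
import numpy as np

rng = np.random.default_rng(42)

# 1,000 candidate predictors and one outcome, all pure noise
n_samples, n_vars = 100, 1000
X = rng.standard_normal((n_samples, n_vars))
y = rng.standard_normal(n_samples)

# Pearson correlation of each predictor with the outcome
Xc = X - X.mean(axis=0)
yc = y - y.mean()
r = (Xc.T @ yc) / (np.sqrt((Xc ** 2).sum(axis=0)) * np.sqrt((yc ** 2).sum()))

# |r| > ~0.197 corresponds roughly to p < 0.05 at n = 100
hits = int((np.abs(r) > 0.197).sum())
print(f"{hits} of {n_vars} noise variables look significant")
```

With a 5% false-positive rate, roughly fifty of the thousand relationships clear the bar despite there being nothing to find, which is exactly why pattern-finding without theory produces answers no one should trust at face value.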

Unfortunately, analysing data to identify patterns requires you to have the data. That means that big data is, by necessity, backward-looking; you can only analyse what has happened in the past, not what you can imagine happening in the future. In fact, there is no room for imagination, for serendipitous connections to be made, for learning new things that go beyond the data. Big data gives you the answer to whatever problem you might have (as long as you can collect enough relevant information to plug into your handy supercomputer). In that world, there is nothing to learn; the right answer is given.

I like right answers as much as the next guy, but in my experience those answers are just not enough to motivate people to action. For instance, knowing that email follow-ups to sales calls are the most time efficient is nice, but that fact is unlikely to convince a salesperson who has always picked up the phone to change his approach, especially if his approach has always worked for him.

People don’t think in the same way that data behaves. They need to be convinced, they want to be part of the creation of the solution. They don’t like the solution to be imposed on them. You can have all the “optimal” solutions you like, but in the real world managers need to convince other people to execute on those solutions. And people have a habit of wanting to contribute to the development of the solutions.

In business, big data doesn’t necessarily drive out creativity; it’s just that its scientific imprimatur makes it very hard to argue the opposite way. Yes, it is possible for creative people to start further down the field when they have a deeper understanding of the underlying relationships that govern their discipline. Advertisers can design better campaigns if they truly understand what consumers are buying and why. But sometimes you need to break the rules to create anything new. Apple’s original iPod was such a hit precisely because it emphasised simple and elegant design features rather than what everyone else was competing on — MP3 sound quality.

Just as companies that build their business on “best practice” ensure that they will never do more than anyone else, companies that let big data dominate their thinking and management style will not be the ones who change the rules of the game in their industry. Even in the leading repository of big data thinking — Silicon Valley — how many start-ups have taken form specifically to capitalise on big data insights? Not Facebook, not Google, and definitely not Apple. These companies actively leverage big data to grow their businesses, but the spark that led to their creation was personal, entrepreneurial and even idiosyncratic.

The inability to understand or capture the human element — that personal, even idiosyncratic, thinking that drives genius — in business is the biggest danger that comes from big data. Has there ever been a major breakthrough whose origin doesn’t reside in the brain of a man or a woman? Imagine in the not-too- distant future a brilliant person, a genius, proclaiming a new way of thinking that is contrary to big data. What would happen to her ideas if she bucked the orthodoxy of big data to suggest a different view of the world not consistent with the dominant digitally derived solution? We might lock up her ideas. If anyone paid attention to what she said, she would be denounced as uninformed.

Companies, like civilisations, advance by leaps and bounds when genius is let loose, not when genius is locked away and deemed too out of the mainstream of data-driven knowledge.

What if Albert Einstein lived today and not 100 years ago? What would big data say about the general theory of relativity, about quantum theory? There was no empirical support for his ideas at the time — that’s why we call them breakthroughs.

Today, Einstein might be looked at as a curiosity, an “interesting” man whose ideas were so out of the mainstream that a blogger would barely pay attention. Come back when you’ve got some data to support your point.

Source: BBC