Scientists Preparing to Turn on Computer Intended to Simulate Entire Human Brain


“IF YOU ARE TRYING TO UNDERSTAND THE BRAIN THIS WILL BE THE HARDWARE TO DO IT ON.”

Brain Stormer

Researchers at Western Sydney University in Australia have teamed up with tech giants Intel and Dell to build a massive supercomputer intended to simulate neural networks at the scale of the human brain.

They say the computer, dubbed DeepSouth, is capable of emulating networks of spiking neurons at a mind-melting 228 trillion synaptic operations per second, putting it on par with the estimated rate at which the human brain completes operations.

The project was announced at this week’s NeuroEng Workshop hosted by Western Sydney’s International Centre for Neuromorphic Systems (ICNS), a forum for luminaries in the field of computational neuroscience.

Once operational in April of next year, DeepSouth could provide researchers with an unparalleled look at how the human brain processes information.

“Progress in our understanding of how brains compute using neurons is hampered by our inability to simulate brain like networks at scale,” said ICNS director and Western Sydney professor André van Schaik in a statement.

Spiking Neurons

Instead of aiming for DeepSouth to become the most powerful conventional supercomputer in the world, the researchers are looking to simulate the brain’s network of neurons using a “neuromorphic system which mimics biological processes,” per the press release.

They say the result is a more efficient and less power-hungry supercomputer, built from the ground up to simulate synaptic activity in the human brain.

In simple terms, neuromorphic computing involves performing a lot of operations at once while only moving very little data, which makes it consume far less energy as well.
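DeepSouth's hardware details aren't public, but the event-driven style of computation it targets can be sketched in software with a leaky integrate-and-fire neuron, the simplest common spiking-neuron model. All parameter values below are illustrative assumptions, not anything from the DeepSouth design:

```python
def simulate_lif(input_current, dt=1e-3, tau=0.02,
                 v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: returns the spike times (seconds)."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        # Membrane potential leaks toward rest while integrating input current.
        v += dt * (-(v - v_rest) / tau + i_in)
        if v >= v_thresh:          # threshold crossed: emit a spike...
            spikes.append(step * dt)
            v = v_reset            # ...and reset the membrane potential
    return spikes

# One second of constant drive, strong enough to make the neuron fire repeatedly.
spikes = simulate_lif([60.0] * 1000)
```

The point of neuromorphic hardware is that nothing happens between spikes: computation and data movement occur only at these sparse events, which is where the energy savings come from.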

“Simulating spiking neural networks on standard computers using Graphics Processing Units (GPUs) and multicore Central Processing Units (CPUs) is just too slow and power intensive,” van Schaik explained in the statement. “Our system will change that.”

The researcher and his team are hoping to “progress our understanding of the brain and develop brain-scale computing applications in diverse fields including sensing, biomedical, robotics, space, and large-scale AI applications.”

For instance, the tech could lead to the development of advanced smart devices or allow AI models to consume less power.

Other researchers are already excited about what the future of DeepSouth could hold.

“At the end of the day there’s two types of researchers who will be interested in this — either those studying neuroscience or those who want to prototype new engineering solutions in the AI space,” Johns Hopkins computer engineering professor Ralph Etienne-Cummings, who was not involved in the project, told New Scientist.

“If you are trying to understand the brain this will be the hardware to do it on,” he added.

Simulated universe hypothesis: Are we really living in a Matrix-like computer simulation?


Could we all be living in a vast computer simulation just like “The Matrix”? A University of Portsmouth physicist is diving deep into this idea, drawing on a new “law of physics” to potentially support the theory that our reality might be artificially constructed.

This concept, known as the “simulated universe hypothesis,” suggests that everything humans experience is similar to a computer game – a created reality where even we might just be advanced avatars. Some, like tech magnate Elon Musk, have expressed belief in this theory. It finds footing in the field of information physics, which theorizes that the essence of physical reality is actually “bits” of information.

Dr. Melvin Vopson’s prior work has delved into the idea that such information not only exists but also has mass. Just as humans have DNA that encodes who we are, the most basic particles in the universe may carry information about their own identities.

A major discovery by Dr. Vopson in 2022 suggested a revolutionary law of physics linked to genetic mutations in organisms, including viruses. It’s a twist on the second law of thermodynamics, a principle stating that disorder in a closed system will always either increase or remain unchanged. Contrary to this, Dr. Vopson observed that in information systems, disorder, or entropy, could either remain the same or even decrease. This revelation gave birth to what he calls the “second law of information dynamics,” or infodynamics.

His study delves into the vast consequences of this new law on everything from genetics to atomic physics and the study of the universe.

“I knew then that this revelation had far-reaching implications across various scientific disciplines,” says Dr. Vopson in a university release. “What I wanted to do next is put the law to the test and see if it could further support the simulation hypothesis by moving it on from the philosophical realm to mainstream science.”

Key revelations from his study include:

  • Biology: This new law might change our understanding of how genes mutate. Such insights could drastically influence fields ranging from genetic therapies and drug development to studying viruses and predicting pandemics.
  • Atomic Physics: Dr. Vopson’s work might explain certain behaviors of electrons in atoms, shedding light on the stability of chemicals.
  • Cosmology: His findings underscore the importance of the new infodynamics law in understanding the universe’s expansion.

“The paper also provides an explanation for the prevalence of symmetry in the universe,” explains Dr. Vopson. “Symmetry principles play an important role with respect to the laws of nature, but until now there has been little explanation as to why that could be. My findings demonstrate that high symmetry corresponds to the lowest information entropy state, potentially explaining nature’s inclination towards it.”

Linking his research to the broader mysteries of the universe, Dr. Vopson suggests that information could be the elusive dark matter, a mysterious substance believed to constitute almost a third of the universe. His work bolsters the idea that information is as real as mass and energy.

“The next steps to complete these studies require empirical testing,” adds Dr. Vopson. “One possible route would be my experiment devised last year to confirm the fifth state of matter in the universe – and change physics as we know it – using particle-antiparticle collisions.”

Are we living in a simulation?


Are we real? Can you prove it? From Descartes to The Matrix, doubting the perception of reality has been an intriguing topic of conversation for centuries. With a tremendous increase in the development of tech, we explore the blurry lines of having our minds tricked into believing that a certain reality is actually a digital creation.

Paul Coventry


What is simulation theory? 

Simulation theory is the concept that we are all virtual beings living in a computer simulation. So it’s no surprise that the subject is bubbling to the surface again, given the radical pace of technological change.

But where did this hypothesis originate?

For this, we have to go way back in time, as far back as ancient China in fact, where we are introduced to an ancient Chinese text, Zhuangzi, written by Daoist philosopher, Zhuang Zhou. The story goes that Zhuang Zhou had a dream that he was a butterfly, with no recollection of his human form. He’d explore the world with his erratic fluttering pattern unique to butterflies, swimming through the air without a care in the world, until…

Suddenly Zhuang Zhou awakens. He feels for his legs and pinches his skin just hard enough for it not to be too painful. It was all a dream; Zhuang Zhou is human after all. Or is he? Could it be the reverse? That he’s a butterfly dreaming he’s a human?

Can dreams really be a sign of simulation?


The writing is on the wall—literally. Zhuang Zhou wrote about his belief that this was a transformation of consciousness and awareness. One lay in reality; the other in illusion. Here lay the divergence of mental states. From this, a philosophical dilemma was born.

Other theories, of course, followed. René Descartes and Plato both questioned whether our senses can ever deliver a pure perception of reality.

Fast forwarding to more modern times, the Swedish-born philosopher Nick Bostrom has made several proposals based primarily around a “posthuman” civilization. In his published paper, Are You Living in a Computer Simulation?, he puts forward the question: “If there were a substantial chance that our civilization will ever get to the posthuman stage and run many ancestor‐simulations, then how come you are not living in such a simulation?”

Rebecka Cedering-Ångström, Principal Researcher, Consumer and Industry Lab, strikes a similar tone: “One of the perks with simulations is that you can play out or test different scenarios, which we already do today.” She continues: “Creating simulations of the historical Earth would generate a lot of interesting insights! For instance, what would have happened if humanity invented a technology or made a scientific breakthrough earlier or later than expected, like the vaccine? What would have happened if we solved, or didn’t solve, the climate crisis in time? How would different decisions in our history yield different futures for humanity? What would life on Earth be like if humans could breathe under water, or read minds? If we can make life-like and real simulations in the future, imagine what we could explore (provided it’s ethically and morally accepted)!”

Connected intelligent machines: Read the report


Discover the ten different roles that consumers expect connected intelligent machines to take in everyday life during the coming decade.

How close are we to making the physical world fully programmable?

The internet of senses is one of three key drivers we’ve established at Ericsson that are most significant to the evolution of the network platform and are all related to bridging the gap between physical reality and the digital realm. The development of cyber-physical systems takes us even closer to producing a believable digital reality.

Another key driver is connected intelligent machines. As their capabilities to self-learn and to interact and communicate with each other grow stronger—particularly with support from AI-to-AI communication and autonomous systems—we become more likely to see them generating hypotheses and reasoned arguments, making recommendations, and taking actions. That said, will intelligent machines ever be able to achieve the heights of artificial general intelligence? Moreover, will we be able to create a simulation embedded in a believable reality? The closest to a believable simulation right now is probably within the gaming industry. It may come as no surprise that the first such merged visual experience is projected by consumers to be in this field. More than 7 in 10 respondents believe VR game worlds will look indistinguishable from physical reality by 2030.

Virtual reality is also providing much more innovative opportunities for the virtual representation of a real-world entity.

Digital Twin connectivity

Whereas simulations can help us to measure and analyze our way through proposed changes in a virtual environment, a digital twin can recreate digital representations using real-time data as well as help us understand alternative scenarios. As a virtual representation of a physical object or system, a digital twin acts as a mirror, providing a means to simulate, predict, forecast behavior, and possibly control the object where applicable. It can be viewed through a digital interface, such as virtual reality glasses. The Ericsson Operations AI Framework was used as a reference for the design of the Digital Twins, and it has been a critical solution in some of our largest-scale operations.
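Concretely, the core loop of a digital twin (ingest live telemetry, mirror the asset's state, simulate forward) can be sketched in a few lines. The asset name and the naive linear forecast below are illustrative stand-ins, not Ericsson's actual engine:

```python
class DigitalTwin:
    """Minimal digital twin: mirrors timestamped sensor readings from a
    physical asset and runs a simple forecast over the mirrored state."""

    def __init__(self, asset_id):
        self.asset_id = asset_id
        self.history = []          # list of (time, temperature) readings

    def ingest(self, t, temperature):
        # Update the virtual replica with real-time data from the asset.
        self.history.append((t, temperature))

    def forecast(self, t_future):
        # Naive linear extrapolation from the last two readings, a
        # stand-in for the richer physics models a real twin would run.
        (t0, x0), (t1, x1) = self.history[-2:]
        rate = (x1 - x0) / (t1 - t0)
        return x1 + rate * (t_future - t1)

twin = DigitalTwin("crane-07")     # hypothetical port-asset identifier
twin.ingest(0, 40.0)
twin.ingest(10, 44.0)
print(twin.forecast(20))           # 48.0
```

A production twin replaces the extrapolation with calibrated physical models and closes the loop by sending control decisions back to the asset, but the mirror-then-simulate structure is the same.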

Take our work with the Port of Livorno for example. Here, a digital twin allowed workers to operate at the intersection of cyber and physical worlds. Data collected through 5G fed an Ericsson-developed digital twin engine, which developed a virtual replica of the port area in real time. The virtual environment enabled a faster and better management of the general cargo and optimized the intra-terminal operations. This augmented-reality approach could well be the innovation platform for the future of critical port operations.

Find out more about the work with Digital Twins at the Port of Livorno.


The internet of senses

When The Matrix was released in 1999, it popularized the simulation theory across the globe. With the fourth Matrix movie set for release later in 2021, it’s overwhelming to think just how much technology has changed since the original. Although we still may not be living in a simulation, we are certainly far better equipped to create a believable virtual one.

Artificial protagonists are even becoming prominent in mainstream literature. In Nobel Prize winner, Kazuo Ishiguro’s new novel, Klara and The Sun; Klara is an “artificial friend”, purchased by parents for the sole purpose of becoming a trusty companion to their teenage kids. Solar-powered Klara views the world as you’d imagine an innovative machine brain would; digesting each segment of land in grid form, as if using an algorithm where potential danger is highlighted using red squares. Klara seems to represent the notion that although AI is rapidly gaining more intelligence and showing up in more areas of our lives, it still lacks the unmatched internal depth of emotional senses felt by humans.

We may not be able to create a simulated world right now, but the creations that are apparent in our world are making it difficult to distinguish from the digital. Today, there’s even a white-collar employee perspective on a potential future reality where digital technology by 2030 could interact with our senses. It begs the question, could this be the future of the workplace?

With hundreds of billions of connected physical objects using embedded sensing, actuation, and computing capabilities to continuously generate informative data, an accurate digital representation of the physical environment is becoming more of a reality than ever.

An Ericsson report, The Dematerialized Office, which delves into the internet of senses in a 2030 future workplace, found that employees want an internet of senses for work. It seems employees are keen to experience the full-sense digital home office, with over half of respondents disclosing a need for a digital workstation allowing full-sense presence at work from anywhere. Even outside of the office, immersive full-sense sales environments, such as a virtual shopping mall or warehouse – that create the opportunity to digitally try before you buy – seem to be favored by the majority of those surveyed.

We’ve even reached the point where we might be accepting digital snacks from our colleagues. 73 percent of senior managers believe that food in the company canteen could be digitally enhanced to taste like anything by 2030.

Read all about the dematerialized office: The internet of senses goes to work.


Michael Björn, Head of Research Agenda at Ericsson’s Consumer Lab, sums up his thoughts: “Asking ourselves if we’re living in a simulation may be interesting from a philosophical perspective, but even if we are, we may never find out. Meanwhile, it’s easy to observe that digital and physical realities are continuing to merge at a rapid pace.

“Take money for example: It has now been such a long time since I used actual banknotes and coins that I only vaguely know the designs on the respective denominations. And I haven’t been to a bank in many years. My salary is just a number in a bank account that I access with credit cards, PayPal, Apple Pay and so on. But I still think of it as real money, not a simulation.

“Similarly, I believe we will continue to digitalize other everyday activities – including sensory experiences like touch, taste and smell. But since this will happen gradually, we’ll be able to integrate these digital and physical aspects in a very natural way. Now, if I could just get one of those digitally enhanced veggie burgers, please…”

Rebecka Cedering-Ångström agrees that in a sense, we may never know if we live in a simulation. But that shouldn’t stop us from tackling what’s in front of us: “If we live in a simulation, the ‘reality’ we experience is set by parameters that do not have to correspond with the ’reality’ outside at all. In fact, this might be a unique reality, and the earth and humanity as we know it may not even exist out there. In that case, you could say we live in our (own) reality. Therefore, no matter if we live in a simulation or not, we need to keep focusing on our current challenges. We have a climate crisis on our hands for instance, and we need to solve it. Because this is most likely the ONLY reality we will get.”

The journey from simulation to digital twin (or digital veggie burger) involves a staggering leap in technological development. Accuracy will always be the major factor that verifies the believability and acceptance of the technology in years to come.

Ultimately, it appears we are indeed real…for now. But the immense potential of our capabilities as humans to advance technologies and create an authentic digital reality is undeniable. Yet it almost leaves us with more questions: How much longer will we be able to fully trust our senses? Or how long will it take before simulations become a full-blown reality in our everyday lives? 2030 suddenly doesn’t feel too far away…

Variable or Fixed? Exploring Entrustment Decision Making in Workplace- and Simulation-Based Assessments


Abstract

Purpose: 

Many models of competency-based medical education (CBME) emphasize assessing entrustable professional activities (EPAs). Despite the centrality of EPAs, researchers have not compared rater entrustment decisions for the same EPA across workplace- and simulation-based assessments. This study aimed to explore rater entrustment decision making across these 2 assessment settings.

Method: 

An interview-based study using a constructivist grounded theory approach was conducted. Gastroenterology faculty at the University of Toronto and the University of Calgary completed EPA assessments of trainees’ endoscopic polypectomy performance in both workplace and simulation settings between November 2019 and January 2021. After each assessment, raters were interviewed to explore how and why they made entrustment decisions within and across settings. Transcribed interview data were coded iteratively using constant comparison to generate themes.

Results: 

Analysis of 20 interviews with 10 raters found that participants (1) held multiple meanings of entrustment and expressed variability in how they justified their entrustment decisions and scoring; (2) held personal caveats for making entrustment decisions “comfortably” (i.e., authenticity, task-related variability, opportunity to assess trainee responses to adverse events, and the opportunity to observe multiple performances over time); (3) experienced cognitive tensions between formative and summative purposes when assessing EPAs; and (4) experienced relative freedom when using simulation to formatively assess EPAs but constraint when using only simulation-based assessments for entrustment decision making.

Conclusions: 

Participants spoke about and defined entrustment variably, which appeared to produce variability in how they judged entrustment across participants and within and across assessment settings. These rater idiosyncrasies suggest that programs implementing CBME must consider how such variability affects the aggregation of EPA assessments, especially those collected in different settings. Program leaders might also consider how to fulfill raters’ criteria for comfortably making entrustment decisions by ensuring clear definitions and purposes when designing and integrating workplace- and simulation-based assessments.

The Biggest Simulation of the Universe Yet Stretches Back to the Big Bang


Remember the philosophical argument our universe is a simulation? Well, a team of astrophysicists say they’ve created the biggest simulated universe yet.  But you won’t find any virtual beings in it—or even planets or stars.

The simulation is 9.6 billion light-years to a side, so its smallest structures are still enormous (the size of small galaxies). The model’s 2.1 trillion particles simulate the dark matter glue holding the universe together.
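Uchuu's code and scale are far beyond a desktop, but the basic mechanics of a gravitational N-body simulation, the technique behind such dark matter models, can be sketched with a softened direct-summation leapfrog integrator. Units and parameter values here are arbitrary toy choices:

```python
import numpy as np

def nbody_step(pos, vel, mass, dt, g=1.0, soft=0.05):
    """One kick-drift-kick leapfrog step for a toy N-body system.
    pos, vel: (N, 3) arrays; mass: (N,) array."""
    def accel(p):
        # Pairwise separation vectors d[i, j] = p[j] - p[i], softened
        # so close encounters don't blow up numerically.
        d = p[np.newaxis, :, :] - p[:, np.newaxis, :]       # (N, N, 3)
        r2 = (d ** 2).sum(axis=-1) + soft ** 2
        inv_r3 = r2 ** -1.5
        np.fill_diagonal(inv_r3, 0.0)                       # no self-force
        return g * (d * inv_r3[..., np.newaxis]
                    * mass[np.newaxis, :, np.newaxis]).sum(axis=1)

    vel = vel + 0.5 * dt * accel(pos)   # kick
    pos = pos + dt * vel                # drift
    vel = vel + 0.5 * dt * accel(pos)   # kick
    return pos, vel

rng = np.random.default_rng(0)
pos = rng.standard_normal((64, 3))      # 64 particles vs. Uchuu's 2.1 trillion
vel = np.zeros((64, 3))
mass = np.ones(64)
for _ in range(10):
    pos, vel = nbody_step(pos, vel, mass, dt=0.01)
```

Production cosmology codes replace the O(N²) direct sum with tree or particle-mesh methods and add cosmological expansion, which is what makes trillions of particles tractable on a machine like ATERUI II.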

Named Uchuu, or Japanese for “outer space,” the simulation covers some 13.8 billion years and will help scientists study how dark matter has driven cosmic evolution since the Big Bang.

Dark matter is mysterious—we’ve yet to pin down its particles—and yet it’s also one of the most powerful natural phenomena known. Scientists believe it makes up 27 percent of the universe. Ordinary matter—stars, planets, you, me—comprises less than 5 percent. Cosmic halos of dark matter resist the dark energy pulling the universe apart, and they drive the evolution of large-scale structures, from the smallest galaxies to the biggest galaxy clusters.

Of course, all this change takes an epic amount of time. It’s so slow that, to us, the universe appears as a still photograph. So scientists make simulations. But making a 3D video of almost the entire universe takes computer power. A lot of it. Uchuu commandeered all 40,200 processors in astronomy’s biggest supercomputer, ATERUI II, for a solid 48 hours a month over the course of a year. The results are gorgeous and useful. “Uchuu is like a time machine,” said Julia F. Ereza, a PhD student at IAA-CSIC.

“We can go forward, backward, and stop in time. We can ‘zoom in’ on a single galaxy or ‘zoom out’ to visualize a whole cluster. We can see what is really happening at every instant and in every place of the Universe from its earliest days to the present…”

Perhaps the coolest part is that the team compressed the whole thing down to a relatively manageable size of 100 terabytes and made it available to anyone. Obviously, most of us won’t have that kind of storage lying around, but many researchers likely will.

This isn’t the first—and won’t be the last—mind-bogglingly big simulation.

Rather, Uchuu is the latest member of a growing family tree dating back to 1970, when Princeton’s Jim Peebles simulated 300 “galaxy” particles on then-state-of-the-art computers.

While earlier simulations sometimes failed to follow sensible evolutionary paths—spawning mutant galaxies or rogue black holes—with the advent of more computing power and better code, they’ve become good enough to support serious science. Some go big. Others go detailed. Increasingly, one needn’t preclude the other.

Every few years, it seems, astronomers break new ground. In 2005, the biggest simulated universe was 10 billion particles; by 2011, it was 374 billion. More recently, the Illustris TNG project has unveiled impressively detailed (and yet still huge) simulations.

Scientists hope that by setting up the universe’s early conditions and physical laws and then hitting play, their simulations will reproduce the basic features of the physical universe as we see it. This lends further weight to theories of cosmology and also helps explain or even make predictions about current and future observations.

Astronomers expect Uchuu will help them interpret galaxy surveys from the Subaru Telescope in Hawaii and the European Space Agency’s Euclid space telescope, due for launch in 2022. Simulations in hand, scientists will refine the story of how all this came to be, and where it’s headed.

Neil deGrasse Tyson thinks the universe might be a simulation


We trust the scientists around us to have the best grasp on how the world actually works.

So at the 2016 Isaac Asimov Memorial Debate at the American Museum of Natural History, which addressed the question of whether the universe is a simulation, the answers from some panelists may be more comforting than the responses from others.


Physicist Lisa Randall, for example, said she thought the odds that the universe isn’t “real” are so low as to be “effectively zero.”

A satisfying answer for those who don’t want to sit there puzzling out what it would mean for the universe not to be real, to be sure.

But on the other hand, astrophysicist Neil deGrasse Tyson, who was hosting the debate, said that he thinks the likelihood of the universe being a simulation “may be very high.”

Uh oh?

The question of whether we know that our universe is real has vexed thinkers going far back into history, long before Descartes made his famous “I think therefore I am” statement. The same question has been explored in modern science-fiction films like “The Matrix” and David Cronenberg’s “eXistenZ.”

But most physicists and philosophers agree that it’s impossible to prove definitively that we don’t live in a simulation and that the universe is real.

Tyson agrees, but says he wouldn’t be surprised if we were to find out somehow that someone else is responsible for our universe.


One of the main arguments that physicists use to talk about what’s known as the “simulation hypothesis” is that if we can prove that it’s possible to simulate a universe — if we can figure out all the laws that govern how everything works (which physicists are trying to do) — that makes it much more likely that it is actually simulated. If we know that it’s possible to do something, it’s much easier to think that thing is being done.

We haven’t been able to figure out how to simulate a universe — yet. But it’s not too hard to imagine that some other creature out there is far smarter than us.

Tyson points out that we humans have always defined ourselves as the smartest beings alive, orders of magnitude more intelligent than species like chimpanzees that share close to 99% of our DNA. We can create symphonies and do trigonometry and astrophysics (some of us, anyway).

But Tyson uses a thought experiment to imagine a life-form that’s as much smarter than us as we are than dogs, chimps, or other terrestrial mammals.

“What would we look like to them? We would be drooling, blithering idiots in their presence,” he says.

Whatever that being is, it very well might be able to create a simulation of a universe.

“And if that’s the case, it is easy for me to imagine that everything in our lives is just the creation of some other entity for their entertainment,” Tyson says. “I’m saying, the day we learn that it is true, I will be the only one in the room saying, I’m not surprised.”

Bring Simulation to Everyone in Your Organization with Apps


Make Mathematical Models Accessible to All Throughout Your Organization

The traditional computational modeling workflow involves creating a geometry, defining all of the necessary materials and physics, meshing and solving the model, and visualizing and postprocessing the results. Making any changes thereafter requires going back to previous steps and redoing them, which demands intimate knowledge of the original model. The computational tools required to make these mathematical models are so complicated to use that there are very few engineers trained to build and manipulate them.

Now, engineers can use COMSOL Multiphysics® software to instead wrap their model in a user-friendly interface that allows them or someone else to focus on the changes that matter – without requiring foreknowledge of the underlying model.

  • CREATE a computational model in the Model Builder.
  • BUILD a customized application in the Application Builder.
  • MANAGE your apps throughout your organization in the COMSOL Server™ software.
  • DEPLOY your apps to your organization.

Amplify the Simulation Value at Your Organization

Using the Application Builder, which is included in the COMSOL Multiphysics® software, engineers can build applications with intuitive user interfaces that are fully customizable based on design needs and that suit a wide variety of purposes. Basic knowledge of how to use COMSOL Multiphysics is the only prerequisite to creating apps from a COMSOL® software model – no special training or additional software is needed.

 
The Application Builder was designed to make it easy to build applications. You can add user interface elements (e.g., buttons, inputs, outputs, graphs, and more) with the click of a button and quickly design powerful apps with the drag-and-drop feature. You can also take advantage of the built-in Java® API with tools that write code for you.

Deploy Apps with the COMSOL Server™ Software

To bring mathematical models to everyone within your company, you can use the COMSOL Server™ software as an app-distribution platform. Your colleagues can interface with COMSOL Server in order to run your apps on their own, over your organization’s private network. Watch the video below to see how it works and how the Application Builder and the COMSOL Server software fit within the simulation modeling workflow.


You can take your apps with you wherever you go throughout the world if you have a COMSOL Server™ license. With this platform, simulation apps can be accessed on computers as well as smartphones and tablets, and run in a web browser or in a free COMSOL desktop client.

The COMSOL Server™ license will allow you to host COMSOL Server on your own computer or your organization’s private network. Your verified users throughout the world can run your apps or their own on your COMSOL Server license, and they can run up to four sessions in parallel. Any person affiliated with an academic institution can run their apps on the Academic Server license for academic use. Furthermore, when you update your apps on COMSOL Server they will be made available immediately so that your users are always running the latest version.

Novel embalming solution for neurosurgical simulation in cadavers: laboratory investigation


Surgical simulation using postmortem human heads is one of the most valid strategies for neurosurgical research and training. The authors customized an embalming formula that provides an optimal retraction profile and lifelike physical properties while preventing microorganism growth and brain decay for neurosurgical simulations in cadavers. They studied the properties of the customized formula and compared its use with the standard postmortem processing techniques: cryopreservation and formaldehyde-based embalming.

METHODS

Eighteen specimens were prepared for neurosurgical simulation: 6 formaldehyde embalmed, 6 cryopreserved, and 6 custom embalmed. The customized formula is a mixture of ethanol 62.4%, glycerol 17%, phenol 10.2%, formaldehyde 2.3%, and water 8.1%. After a standard pterional craniotomy, retraction profiles and brain stiffness were studied using an intracranial pressure transducer and monitor. Preservation time—that is, time that tissue remained in optimal condition—between specimen groups was also compared through periodical reports during a 48-hour simulation.
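As a quick sanity check, the published component percentages sum to 100, and scaling them gives the volume of each ingredient for any batch size. The helper below is just illustrative arithmetic over the reported fractions:

```python
# Component fractions (percent by volume) of the customized embalming
# solution, as reported in the study.
formula = {"ethanol": 62.4, "glycerol": 17.0, "phenol": 10.2,
           "formaldehyde": 2.3, "water": 8.1}
assert abs(sum(formula.values()) - 100.0) < 1e-9   # fractions sum to 100%

def batch_volumes(total_ml):
    """Volume (ml) of each component needed to mix a batch of a given size."""
    return {name: total_ml * pct / 100.0 for name, pct in formula.items()}

vols = batch_volumes(500)   # e.g. ~312 ml of ethanol for a 500 ml batch
```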

RESULTS

The mean (± standard deviation) retraction pressures were highest in the formaldehyde group and lowest in the cryopreserved group. The customized formula provided a mean retraction pressure almost 3 times lower than formaldehyde (36 ± 3 vs 103 ± 14 mm Hg, p < 0.01) and very similar to cryopreservation (24 ± 6 mm Hg, p < 0.01). Preservation time in the cryopreserved group was limited to 4 hours, whereas tissue in the customized and formaldehyde groups remained in optimal condition for the duration of the 48-hour experiment.

CONCLUSIONS

The customized embalming solution described herein is optimal for allowing retraction and surgical maneuverability while preventing decay. The authors were able to significantly lower the formaldehyde content as compared with that in standard formulas. The custom embalming solution combines the benefits of cryopreservation (for example, biological brain tissue properties) and formaldehyde embalming (for example, preservation time and microorganism growth prevention) and minimizes their drawbacks, that is, rapid decay in the former and stiffness in the latter. The presented embalming formula provides an important advance for neurosurgical simulations in research and teaching.

Artificial worm starts to wriggle


The project to create the C. elegans nematode in code should unlock more secrets of how it lives

A project to create artificial life has hit a key milestone – the simulated creature can now wriggle.

The Open Worm project aims to build a lifelike copy of a nematode roundworm entirely out of computer code.

This week the creature’s creators added code that gets the virtual worm wriggling like the real thing.

The next step is to hook the body up to a simulation of the worm’s brain to help understand more about how and why it moves.

Swim speed

The Open Worm project started in May 2013 and is slowly working towards creating a virtual copy of the Caenorhabditis elegans nematode. This worm is one of the most widely studied creatures on Earth and was the first multicelled organism to have its entire genome mapped.

The simulated worm slowly being built out of code aims to replicate C. elegans in exquisite detail, with each of its roughly 1,000 cells modelled on a computer.

Early work on the worm involved making a few muscle segments twitch, but now the team has a complete worm to work with. The code governing how the creature’s muscles move has been refined so that its swaying motion and speed match those of its real-life counterpart. The tiny C. elegans manages to move around in water at a rate of about 1 mm per second.
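Open Worm's own code models muscles and fluid dynamics in far more detail, but the driving signal behind this kind of swimming can be pictured as a sine wave travelling from head to tail. A toy sketch, with frequency, wavelength, and amplitude as assumed round numbers rather than the project's fitted values:

```python
import math

def segment_angles(n_segments=48, t=0.0, freq=2.0, wavelength=24, amp=0.3):
    """Target bending angle (radians) of each muscle segment at time t:
    a sine wave travelling head-to-tail produces the undulating stroke.
    freq is in Hz; wavelength is in segments along the body."""
    return [amp * math.sin(2 * math.pi * (freq * t - s / wavelength))
            for s in range(n_segments)]

angles_now = segment_angles(t=0.0)
angles_later = segment_angles(t=0.1)   # the wave has advanced down the body
```

Plugging a simulated nervous system into such a body means replacing the hand-written sine wave with bending commands generated by the modelled neurons, which is exactly the project's stated next step.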

“Its movement closely resembles published literature on how C. elegans swims,” project leader John Hurliman told the New World Notes blog.

The immediate next step for the project is to plug in the system used to model how nerve fibres in the worm fire to get muscle segments twitching and propelling the whole creature forward.

Soon the Open Worm creators hope to make a virtual version of C. elegans available online so people can interact with it via a web browser.

Black Hole Radiation Simulated in Lab


For the first time, scientists have been able to simulate the type of radiation likely to be emitted from black holes.

A team of Italian scientists fired a laser beam into a chunk of glass to create an analogue (or simulation) of the Hawking radiation that many physicists expect is emitted by black holes.

A spokesperson for the research group said: “Although the laser experiment superficially bears little resemblance to ultra-dense black holes, the mathematical theories used to describe both are similar enough that confirmation of laser-induced Hawking radiation would bolster confidence that black holes also emit Hawking radiation.”

The renowned physicist Stephen Hawking first predicted this sort of radiation in 1974, but it has proved elusive to detect, even in the lab. This research group was able to use a “bulk glass target” to isolate the apparent Hawking radiation from the other forms of light emitted during such experiments.

Black holes are regions in space from which nothing can escape, not even light. Despite their name, however, they are believed to emit weak forms of radiation, such as Hawking radiation. Physicists expect that this radiation may be so weak as to be undetectable.

Source: http://www.communicatescience.eu