Experiencing Emergence

The concept of emergence applies to a wide swath of our world, from geographic formations to ant colonies and human groups.  Definitions have expanded since Aristotle’s description of the whole as something beyond its parts.  In general terms, emergence refers to what happens in a complex system as a result of the collective interactions of its many individual entities.

The results are often unexpected and impossible to predict fully from an understanding of the entities alone.  No mathematical model or discrete simulation can accurately represent the outcomes; yet when decisions are being made in our car dashboards every second(1), or billions might be spent sending a spacecraft to Mars(2), trusting outcomes is a requirement.

In his 2002 book A New Kind of Science, Stephen Wolfram describes computational irreducibility: the inability to accurately predict the results of complex systems.  The only way to discover what will happen is to run them.(3)  There are no shortcuts.  The diversity of behaviour and interactions among simple components produces indeterminate outcomes, and adding more components multiplies the subtleties and variations of those outcomes.
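Wolfram’s own favourite demonstration is the Rule 30 cellular automaton: each cell’s next state depends only on itself and its two neighbours, yet no formula predicts row n faster than computing every row before it.  A minimal Python sketch of the idea:

```python
# Rule 30 cellular automaton, Wolfram's classic example of computational
# irreducibility: the rule is trivial, but the only way to learn what
# row n looks like is to actually run rows 1 through n-1.

RULE_30 = {(1, 1, 1): 0, (1, 1, 0): 0, (1, 0, 1): 0, (1, 0, 0): 1,
           (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}

def step(row):
    # Compute the next generation; edges wrap so the row length stays fixed.
    n = len(row)
    return [RULE_30[(row[(i - 1) % n], row[i], row[(i + 1) % n])]
            for i in range(n)]

def run(width=31, steps=15):
    # Start from a single live cell in the centre and evolve the row.
    row = [0] * width
    row[width // 2] = 1
    history = [row]
    for _ in range(steps):
        row = step(row)
        history.append(row)
    return history

if __name__ == "__main__":
    for row in run():
        print("".join("#" if c else "." for c in row))
```

Printing the rows shows the characteristic chaotic triangle: structure that is obvious once generated, but unavailable by any shortcut.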

Which brings us to where we are today: we need a way to run them.  We have complexity and, along with it, too many discoveries that happen only once systems are live.  Most digital systems involve expanding integrations of devices and software, connected to other systems, relying on various networks, IoT, sensors, and services, all confronted by unprecedented scale.  Then add the varied behaviour of users, AI, algorithms, big data, innovative architectures, blockchain, and even environmental conditions.  Whether it is an in-vehicle system or a smart airport, unexpected results in the field are unwelcome.

There are many examples of big repercussions from the unexpected.  Consider the unanticipated $2.2 billion to be spent by 2023 trying to fix Canada’s failed federal Phoenix Pay System(4), which, by June 2017, a year after it went live, had underpaid 51,000 employees and overpaid 59,000 by hundreds of millions of dollars, with no end in sight.  The original project was budgeted at $309 million.

Even those who have been in the tech business for a while can be blindsided.  On September 4, 2018, inclement weather caused a power spike that took an Azure service centre out completely for part of a day, and by day three customers were still suffering latency and data-access issues with no ETA for complete recovery announced.(5)  The failure of a failover, a lack of available redundancy, and the impacts of dependencies in a live environment are some of the guesses.  The uncalculated consequences extend beyond Azure all the way to the customers of the tech companies who rely on it.

More of the same doesn’t seem to make the difference.  Access to the greatest resources, sophisticated development teams, extensive test and QA processes, hefty price tags… emergent behaviour doesn’t discriminate.  Whether it is payment systems that don’t pay, recovery systems that don’t recover, F-22s stopped by the International Date Line(6), accessible healthcare systems that people can’t access(7), brakes that don’t brake(8), or your next release, understanding emergent behaviour is critical.

Circumventing the unexpected is RUNWITHIT Synthetics’ (RWI) specialty.  It begins with creating synthetic entities, vast numbers of them if required, each with its own realistically possible range of behaviours.  Coordinating all of that entity activity at the edges of any system then provides a complete living environment where even elusive emergent characteristics are observable.  You can manipulate time and explore any sophisticated scenario at any scale.
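RWI’s platform itself is proprietary, but the underlying idea, a population of entities, each drawing from its own plausible behavioural range, with the aggregate observable only by running it, can be loosely illustrated with a toy agent-based sketch.  All names, ranges, and parameters here are illustrative assumptions, not RWI’s implementation:

```python
import random

# Toy agent-based sketch (illustrative only, not RWI's platform): each
# synthetic entity gets its own plausible behavioural range, and the
# system-level load curve emerges only when the population runs together.

class SyntheticEntity:
    def __init__(self, rng):
        self.rng = rng
        # Each entity's request rate is drawn from an assumed plausible range.
        self.rate = rng.uniform(0.1, 2.0)  # requests per tick

    def act(self):
        # Probabilistic activity around the entity's personal rate.
        return self.rng.random() < min(self.rate / 2.0, 1.0)

def run_scenario(n_entities=10_000, ticks=60, seed=42):
    # Run the whole population forward and record aggregate load per tick.
    rng = random.Random(seed)
    entities = [SyntheticEntity(rng) for _ in range(n_entities)]
    return [sum(e.act() for e in entities) for _ in range(ticks)]

if __name__ == "__main__":
    load = run_scenario()
    print(f"peak load: {max(load)}, mean load: {sum(load) / len(load):.0f}")
```

Even in this trivial sketch, peaks and troughs in the aggregate load are not derivable from any single entity’s definition; the scenario has to be run to be seen, which is the point.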

And this data from the future is real, produced by the system, end-to-end.  

From a development and engineering perspective, this experiential future data expedites diagnostics, solution design, mitigation, optimization, innovation, and readiness.  The savings from building systems right the first time are compounded by the elimination of firefighting and rework.

For decision makers, this insight is strategic.  Unanswerable questions now become scenarios you can run with realism.  If you are one of the decision makers others count on to know it all, you can now make the future happen, experience what emerges, and have the time to make sure it will be good.


1) “The crash may be the first case of one of its autonomous cars hitting another vehicle and the fault of the self-driving car.” Shepardson, D. (2016, February 29). Google says it bears 'some responsibility' after self-driving car... Retrieved September 14, 2018, from https://reut.rs/2Ip06gn

2) “If you have a spacecraft in orbit around Mars, it’s 200 million miles away, it costs hundreds of millions of dollars, potentially even a billion dollars, to send there. If anything goes wrong, you’re done. There’s no way to repair, visit, replace that thing without spending an immense amount of money,” Wagstaff says. “So if we want to put machine learning in play, then the people running these missions need to understand what it’s doing and why, because why would they trust it to control their Mars rover or orbiter if they don’t know why it’s making the choices it’s making?” Gershgorn, D. (2017, December 18). AI is now so complex its creators can’t trust why it makes decisions. Quartz. Retrieved from https://bit.ly/2xLBYRc

3) Stephen Wolfram: A New Kind of Science. (2002). Retrieved from https://bit.ly/2Nax4BS

4) “A new report from the Senate finance committee, released Tuesday, said the Phoenix system has failed to properly pay nearly 153,000 public servants – more than half of the federal civil service – since it was implemented more than two years ago. The report also found Phoenix has cost taxpayers $954-million to date – including the $309-million originally budgeted to develop the pay system” Zilio, M. (2018, August 01). Phoenix pay system problems on track to cost government $2.2-billion: Report. Retrieved September 14, 2018, from https://tgam.ca/2Otns7c

5) “It's unclear to what extent Microsoft understands those unspecified internal systems' dependencies and their potential impact” Montgomery, J. (2018, September 07). Azure outage spotlights cloud infrastructure choices. Retrieved September 14, 2018, from https://bit.ly/2xZbfQf

6) Slashdot. (2007, February 25). Retrieved September 14, 2018, from https://bit.ly/2DCk8oP

7) “Work on the website started in 2010. During the first two years, the project suffered from communication breakdowns and needlessly complex implementation” Muschick, P. (2016, February 25). Obamacare website crash stemmed from extreme government incompetence. Retrieved September 14, 2018, from https://bit.ly/2RdZtdM

8) “Toyota said Thursday a software glitch is to blame for braking problems in the 2010 model” Lah, K. (2010, February 04). Toyota: Software to blame for Prius brake problems. Retrieved September 14, 2018, from https://cnn.it/2OXtBIK