Frequently Asked Questions


The Organisation

What does Simundo do?

Simundo is an open, independent, not-for-profit organisation. We advocate greatly accelerating progress in climate science by radically improving supercomputer performance for climate simulation and modelling (a million-fold boost or better). We seek far better understanding and predictive insight than current state-of-the-art computing facilities allow. Such forward knowledge, as is well established, is vital to decision-making on how to avoid, mitigate, or abate global climate change. Simundo computers will not directly solve global warming, but they will help inform and enlighten, and ultimately develop and verify schemes for stabilising the climate (if necessary). Then, it is up to us all to enact real change.
Climate change is probably the toughest and most complex problem ever faced in modern times and we’re rising to that challenge with gusto. Simundo’s first act is to throw down the gauntlet, be polemical, challenge, and inspire. To this end we have started by asking what we believe to be the right questions and providing some prototypical solutions or blueprints as answers. The computer or Supermodel we propose is just that, a stake in the ground that says: here we are, this is how we can do it, now let’s put our heads together and do something even better! Insanely ambitious and wildly optimistic on the one hand, yet totally practical and down-to-earth on the other.
Simundo is first and foremost a benevolent organisation. And it is vital that Simundo is seen to be so in order that its climate science attains and maintains legitimacy, traction, and the appropriate influence. Transparency, openness, impartiality, and accountability are key. Our goals very much include engaging as broad an audience as possible, to popularise not only the “cool stuff”, but also the real underlying issues. We’d like to engender responsibility and, above all, an active, informed choice of the kind of future this planet will enjoy. Everybody everywhere shares one ecosystem, atmosphere, and its climate. Is there any other way to solve global climate challenges but to come together internationally and cooperate? Join us and help!

What is SiMUNDO & what does it stand for, if anything?

The Simundo name can be construed in many ways, but was originally inspired by simulation and mundo, meaning world: simulate the world. Si is also the chemical symbol for silicon, which is what computers are generally made of. There's undo and UN in the name… Oh, and luckily the simundo.org domain was available for registration. For more on the wordplay, see our what's_in_a_name page.

Who runs Simundo?

The organisation is run by a growing band of volunteers. The founders, who came up with the notion of a geodesic supercomputer in 2007, continue to contribute actively. Simundo is in the process of becoming a social enterprise, or may seek charitable status in the near future. Its directors/governors will be announced once key people's commitments are confirmed.

How do you plan to finance & realise this project?

Like other Big Science projects, the massive funding needed later in the project will likely come from international governmental coffers, possibly under the auspices of the United Nations. However, in 2011 we are seeking the support of faster-moving organisations such as charities and foundations, personal donations, and, last but not least, volunteer work. Until now we have been volunteer funded, and have only recently embarked on fundraising. The funding required early on is a tiny fraction of the costs incurred towards the end of the project. Help us, and your children's children: consider donating now!

The Science

What is Global Warming, or GW and GCC?

Global warming (GW) refers to the long-term rise in the earth's average surface temperature, driven principally by greenhouse gases such as carbon dioxide accumulating in the atmosphere. Global climate change (GCC) is the broader term: it covers not just the warming itself but the resulting shifts in rainfall, sea level, ocean currents, ice cover, and extreme weather. The two are often used interchangeably, but climate change better captures what we actually need to understand and predict, and that is precisely what demands the simulation and modelling power Simundo is proposing.

Why should it matter that we know about ocean currents and cloud patterns?

Myriad factors affect weather and climate. Amongst the most important are clouds, which affect the earth's albedo (the reflection of the sun's energy back into space), and ocean currents, which shunt tremendous amounts of thermal energy around the globe. We mention these because they both require high-resolution grids and advanced modelling to resolve properly, or at all. They need absolutely humongous amounts of computing power, which is not even on the horizon using conventional supercomputers. Without this level of model complexity and performance, any predictions are seriously compromised. Simundo aims to solve this problem, and more!

The Supermodels

What is a Supercomputer anyway?

A supercomputer is a really fast computer: simply, any computer at the forefront of processing performance right now (your smartphone today has more oomph than a 1980s supercomputer). At the time of writing (August 2010), the top two are Oak Ridge National Laboratory's Jaguar and China's Nebulae. The term supercomputer was first used for Seymour Cray's designs in the 1960s and, later, for the company and computers that famously bore his name. Those machines were wardrobe-sized. More recently, supercomputers have become massively parallel, warehouse-sized installations, most often using COTS (commodity-off-the-shelf) processors from Intel, IBM, AMD, or NVIDIA—the same gubbins as at the heart of your PC or games console. Parallel now means taking tens of thousands of CPUs and orchestrating them as a single big system. In 2008 the best of these machines attained 1 PFLOPS (peta floating-point operations per second), or 1,000,000,000,000,000 arithmetic operations every second, though not on climate simulation code.
Simundo relies on the same concept of parallelism (divide and conquer), except that it will use millions or perhaps billions of extremely fast processors. Simundo's goal is machines reaching 1 ZFLOPS (Zetta-class), or 1,000,000,000,000,000,000,000 arithmetic operations per second, within 5 years. This achievement would be around 25 years "ahead of the curve".
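For the curious, here is a rough back-of-envelope sketch (in Python) of where that "25 years" comes from; the ~1.3-year performance-doubling time is our assumption, loosely based on long-run supercomputer trends, so treat the output as indicative only:

```python
import math

peta = 1e15   # 1 PFLOPS, first reached in 2008
zetta = 1e21  # 1 ZFLOPS, the Simundo target

speedup = zetta / peta            # a million times faster
doublings = math.log2(speedup)    # ~20 doublings needed
years_per_doubling = 1.3          # assumed long-run trend

print(f"Speed-up required: {speedup:,.0f}x")
print(f"Doublings needed:  {doublings:.1f}")
print(f"Years 'on trend':  {doublings * years_per_doubling:.0f}")
```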

What kind of computers do we need for climate modelling?

This is the exact question that prompted the seminal idea for Simundo back in 2007, and the answer, prima facie, has to be something much like the earth. The idea was simply to make the model a similar physical shape to the globe, or more accurately, like the thin shell that constitutes the ecosystem. Form fits function. In short, the Simundo supercomputer is arranged as a geodesic, spherical mesh of millions of little computing elements. It turns out that, as computers get faster and faster, this will be the only way to build earth simulators sensibly and efficiently.
Think of climate modelling as a mesh of interacting unit blocks or cells, each one representing a chunk of atmosphere, sea, or land. Each cell emulates the behaviour of its real counterpart by modelling its physics, chemistry, life, and so forth. In this virtual modelling world, cells are hardware processors and software processes.
In the real world, nearly all climate events and processes are local phenomena—sunlight is a notable exception. Each area, or model cell, affects its neighbourhood or local environment, but does not directly affect cells on the other side of the planet (not instantly, and only by affecting the cells in between). Hot air heats up the cooler air next to it, and that heats the next, and so on. Ocean currents flow from one place to the next, and do not suddenly jump from the Pacific to the Atlantic.
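To make the locality idea concrete, here is a deliberately tiny toy sketch (not a climate model, and every number in it is made up for illustration): a ring of cells in which each one exchanges information only with its immediate neighbours at each step, so an effect can only spread one neighbourhood at a time.

```python
# Toy illustration only: each cell talks to its immediate neighbours
# and to nothing else, just like the locality described above.
def step(cells, alpha=0.1):
    """Advance a 1D ring of 'temperature' cells by simple diffusion."""
    n = len(cells)
    return [
        c + alpha * (cells[(i - 1) % n] + cells[(i + 1) % n] - 2 * c)
        for i, c in enumerate(cells)
    ]

cells = [0.0] * 16
cells[0] = 100.0          # one hot cell in an otherwise cold ring
for _ in range(50):       # heat spreads one neighbourhood per step;
    cells = step(cells)   # it never jumps across the ring
print([round(c, 1) for c in cells])
```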
Supercomputers are getting faster and more parallel (more CPUs meshed together), and the time step it takes to calculate the behaviour of each cell is becoming vanishingly small. Simundo computers might achieve such steps on the order of nanoseconds. Between each computational step, every cell has to inform its neighbours what's going on within it—that is, cells need to communicate. At the billionth-of-a-second level (nanoseconds, gigahertz), the finite speed of light dictates that the distance between neighbouring cells and processors cannot exceed a few centimetres, worst case, without seriously slamming the brakes on performance. Therefore, the supercomputer needs to get physically smaller, and processors must be carefully arranged so that they sit right next to their logical neighbours. By necessity then, a physical analogue of the real world is indicated: ergo, geode!
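The "few centimetres" figure falls straight out of the arithmetic. A minimal sketch, assuming signals travel at roughly two-thirds of the speed of light on real wires or fibre:

```python
c = 3.0e8               # speed of light in a vacuum, m/s
step = 1e-9             # one-nanosecond computational step
signal_fraction = 0.66  # assumed fraction of c achievable on wires/fibre

reach_per_step = c * signal_fraction * step   # metres a signal covers per step
print(f"Signal reach per nanosecond: {reach_per_step * 100:.0f} cm")
# Allowing time for the computation itself and a reply, neighbouring
# processors really do need to sit within a few centimetres of each other.
```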
From an application point of view, processing steps must get shorter because, as we increase spatial resolution (many more, smaller cells), behaviours in simulated time get quicker with respect to each cell—wind transporting mass across a cell, for instance: the smaller the cells, the quicker the transit time. So, shorter time steps mean that more of them make up each simulated day, therefore more computer operations are involved, and more supercomputer performance is required. Now, this is worse than a double whammy, since the number of cells also goes up as the cube of the resolution (Cartesian 3D space). In short, as resolution doubles, the required computing power generally goes up by the fourth power, or by a factor of 16! By extension, getting down to fine, 1 km-size cells from the 100 km to 1,000 km of today's models requires absolutely extraordinary levels of performance: somewhere in the millions to billions of times current supercomputer capacity.
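A quick sketch of that fourth-power rule; the only assumption is the rule itself (cells scale with the cube of linear resolution, and the time step shrinks by a further factor), so real models may differ in detail:

```python
def compute_factor(coarse_km, fine_km):
    """Naive fourth-power scaling: cells go up as the cube of the linear
    resolution, and the shorter time step adds a further factor."""
    return (coarse_km / fine_km) ** 4

print(compute_factor(2, 1))      # double the resolution: 16x more work
print(compute_factor(100, 1))    # 100 km cells down to 1 km: 100,000,000x
```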

What is a Supermodel and what makes it different and special?

Supermodel is our tongue-in-cheek term for the combination of a new breed of supercomputer and simulation models, representing the very best synthesis of both worlds. With luck, supermodels will help us find the best and most beautiful of all possible worlds, in part by revealing (and avoiding) our potential worst futures. A list of the salient hardware features that make Simundo computers very different from today's machines follows below. More details can be found on the Supermodel pages…
•    Physical shape, configuration and size
•    Cooling efficiency and performance
•    Communications structure, and speed/bandwidth performance
•    Clocking and synchronisation philosophy
•    Repair speed and maintenance strategy

Why can’t we just do this on distributed PCs like BOINC & Climateprediction.net?

For the exact same reasons today’s supercomputer technology can’t cut it:
1.    Speed of light;
2.    Bangs per Watt.
Firstly, most high-performance computing networks use commodity-off-the-shelf (COTS) chips that are mass-produced for video games and as personal computer CPUs. They are designed for the best performance per chip/CPU, and then for the best bangs-per-buck (performance/$) in systems where the number of cores/chips/CPUs is very small. These designs have contributed to the success of BOINC, Climateprediction.net and many other great projects. Note that the bangs-per-buck are particularly good for BOINC since, in essence, the computers are "free". The wide-area networks of BOINC PCs are conceptually very similar to today's supercomputers like IBM's Blue Gene, which uses thousands of PC-like computer boards in rack cabinets and vast rooms, hooked up by a high-speed local-area network. So the speed-of-light problem here lies in the communications network that hooks the processors together. For computer rooms the interconnect distances are typically less than 100 m; for BOINC they can easily be thousands of kilometres.
The problem with running on essentially isolated, single processors is that only "one" processor can work on a job. Each job or scenario (a member of an "ensemble") is a serial process—yesterday's weather affects today's, which affects tomorrow's—and cannot be diced up into pieces and distributed, because of an absolutely unassailable problem: the communication time of any intermediate result (i.e. today's conditions) is much, much longer than the processing time to the next step. In other words, there's no point distributing the processing of one scenario to different desktop PCs, since a super-sluggish Internet will just get in the way and slow things right down. The "isolation" of these processors comes ultimately from the speed of light at 300,000 km/s—for data to travel 300 km to another PC takes at the very least 1 millisecond (an eternity for electronics). Now consider that we're talking about building computers that would be perhaps more than a billion times faster than your desktop. What might take a minute on a Simundo machine would take 2,000 years on yours. No surprise then, that BOINC-style computing suffers similar, but even more exacerbated, problems compared to classical supercomputers when we look ahead a little to supermodelling. Remember, there is very little point in getting predictions slower than realtime (after things actually happen)!
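An illustrative calculation of why that is; all of the figures here (the per-PC speed, the work per cell step, and the Internet latency) are assumptions picked only to show the shape of the problem:

```python
flops_per_pc = 1e10   # ~10 GFLOPS, a decent desktop CPU (assumed)
work_per_step = 1e4   # flops to advance one cell one step (assumed)
net_latency = 10e-3   # ~10 ms round trip over the Internet (optimistic)

compute_time = work_per_step / flops_per_pc   # ~1 microsecond of real work

print(f"Time computing a step:   {compute_time * 1e6:.0f} us")
print(f"Time waiting on the net: {net_latency * 1e6:.0f} us")
print(f"Waiting vs working:      {net_latency / compute_time:,.0f}x")
```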
Secondly, PCs at home are in reality very far from free. Even without considering capital costs and environmental footprints, they consume power, and plenty of it when working hard: roughly on the order of 200 W each. Finding 200 gigawatts anywhere in the world to power a scaled-up distributed supercomputer would be a ridiculous proposition, costing millions of dollars an hour in electricity bills alone (getting on for $100bn/yr!). Absurd, not least because the inefficiency of such a beast may itself contribute to climate change. Indeed, the power consumption of computers today, as a percentage of all power consumed globally, is already quite significant. Clearly, something in this picture has to change radically, namely bangs-per-watt. One of the greatest design challenges for Simundo is finding much, much lower-power computing technologies. Already, some promising ideas are on the table and we will be strongly encouraging people to put on their thinking caps and crack this nut!
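Roughly where those figures come from, assuming 200 W per PC, on the order of a billion PCs to approach the required parallelism, and an industrial electricity price of about $0.06/kWh (all assumptions, chosen only to match the ballpark above):

```python
watts_per_pc = 200      # a hard-working desktop PC
number_of_pcs = 1e9     # assumed scale needed for the parallelism
price_per_kwh = 0.06    # assumed industrial electricity rate, USD

total_gw = watts_per_pc * number_of_pcs / 1e9
cost_per_hour = (watts_per_pc * number_of_pcs / 1000) * price_per_kwh
cost_per_year = cost_per_hour * 24 * 365

print(f"Power draw:  {total_gw:.0f} GW")
print(f"Electricity: ${cost_per_hour / 1e6:.0f}M per hour")
print(f"             ${cost_per_year / 1e9:.0f}bn per year")
```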

How much will all this cost? Sounds expensive…

We are looking at between $10bn and $100bn in order to fully realise the Simundo project. Quite simply, with more money we will see better results (faster and more accurate climate models). Our early estimates currently put each Zetta computer at between $3bn and $10bn (2010 USD), with several of these to be installed around the world. We foresee that design, infrastructure, and software costs would be significant but not dominant. Running costs, in terms of electrical power consumption, management, maintenance and so forth, would be on the order of $2bn a year. See the finance page. These estimates are clearly early stage and represent a synthesis of what we believe should, could, and needs to be spent. There is certainly a lower bound for the project, below which development costs begin to be overbearing and where, also, we'd simply be better off with "conventional" supercomputers. A guesstimate for that lower bound is perhaps $1bn.
Expensive, yes! Some perspective: even at the high-water mark of $100bn, the Simundo project would cost approximately the same as the International Space Station. That would be a tiny fraction of the cost of the F-35 fighter program, or a fraction of one percent of the 2007-2010 global bailout programs. $20bn a year over five years is a tiny 0.034% of global GDP ($58 trillion in 2009), and is around half what folk in the USA spend on haircare products per year. Each computer would cost about the same as the latest Mars rover, or a fraction of what people frivolously spend globally on bottled water every year. Yep, cheap at the price!
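For those who like to check the sums, the GDP figure above works out as follows, using only the numbers already quoted in this answer:

```python
project_cost = 100e9   # the $100bn high-water mark
years = 5
global_gdp = 58e12     # world GDP in 2009, as quoted above

annual_spend = project_cost / years
share_of_gdp = annual_spend / global_gdp * 100

print(f"Annual spend: ${annual_spend / 1e9:.0f}bn")
print(f"Share of GDP: {share_of_gdp:.3f}%")   # ~0.034%
```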
Again, see the finance page for more reference points.

What kind of timeline do you project to reach a finished model?

From kick-off, which is to say once political and financial support is in place, we'd like to think about 5 years to the first Simundo Zetta computer, and then perhaps 10 years to the Yotta version. Within a couple of years we would have full virtual models (simulations of simulations!) of what we're going to build. From there, implementation, manufacturing, construction, and commissioning would take about a further 3 years. This is an extremely aggressive timescale.
Simundo should be easier, cheaper, and faster than going to the moon was, or even going back to the moon again would be.

Can Simundo really beat the world's biggest and best computer companies?

Simundo does not aim to. The idea is not competitive but rather an open collaboration, the aim being to include everyone who can best contribute. But, yes, it is possible, and it has been done before.

What other stuff do we need to predict climate changes?

Simundo computers and supermodels—or something like them—are a necessary, but not sufficient, element of global climate prediction. Of course, climate science itself (our knowledge and understanding) will need significant improvement. To a great extent Simundo goes hand-in-hand with that. However, in order to know what's going to happen to the planet, we need a really good idea of the state it's in now, and as much information about the past as possible (the initial-value problem and the historical record are key). Over time, we also need to verify models against reality, and the more accurate our picture of reality the better. To achieve that, we need far better monitoring and measurement facilities. Investment in space satellites and earth-based metrology is vital, and some of that is happening already. For instance, the European Space Agency's Living Planet Programme and NASA's diverse missions are amazing, but we can still do better.

Could such a major advance in technology be manufactured any time soon?

Simundo is not predicated on technological miracles, but rather on the very best of what's available now, or what we reasonably expect can be built within the design timeframe. Chips are still getting better according to the famous Moore's Law, and we can pretty much count on that continuing for a few years. Clever folk are constantly having great ideas. Progress is relentless. However, what we envisage is beyond that: a massive and concerted effort not to up our game incrementally, but to change it radically. This, of course, takes money and people, as well as a significant shift in mindset. We see the world's brightest and best clubbing together to advance design and technology well beyond current roadmaps. Things like: architecting designs that process climate data most effectively and use much less power (current computers are incredibly wasteful); better software algorithms and code optimisation; optical synchronisation and communication techniques… All these things, and many more, will be important (though not necessarily crucial), and Simundo will benefit from their accelerated development. Then there's the obvious: spend a lot more money on designing and building the computer, and buy more processors and more performance. We would like to see a project budget of at least $10bn, and more like $100bn if things go well, around a hundred to a thousand times what has previously been spent on any single supercomputer. Check out our finance page to put these numbers in perspective.

Where do you think it is possible to build and place such devices?

Sophisticated technology is manufactured around the world, predominantly in the USA, Asia, and Europe. Manufacturing of other components and systems is also globalised, except for the odd, very specialised part. The issue here is predominantly one of design, funding, and then gaining access to the most sophisticated manufacturing processes wherever they are. This is clearly a massive project that will require international cooperation to do well. Certainly a single rich country could fund and support what we have in mind, but we'd rather see every country involved, with facilities replicated and at least one on every continent. Choosing the actual installation sites will involve a few technical constraints (e.g. power and communications infrastructure, perhaps cooling water supply), but the locations chosen will ultimately be a political decision, and hopefully one faster and a little less contentious than ITER's!

What if we spend all that money and resources and then the whole thing fails?

What happens if we don't? Let the popular Greg Craven talk to this… Again, a decent sense of perspective is useful here. We are taking a highly educated bet on the future of both technology and the planet by dedicating large resources to investigating and developing them. $100bn over 5 years would indeed be a sizeable wager, but one with quantifiable risks and very tangible ancillary benefits. Compare this with the far larger amounts of money people gamble on the horses every year, where the odds are stacked and the net benefits dubious. In the UK alone, gambling is a $100bn/yr business, or around 4% of GDP.

Isn’t it cheaper to improve existing supercomputers than go to all that trouble?

We are not throwing away existing knowledge and experience, but building the best possible on top of it; standing on the shoulders of giants. However, we already know there are very significant optimisations that will yield higher levels of performance and drastically lower power consumption. Much of today's "bloat" can be cut away to make lean, mean machines more specifically targeted toward climate modelling. One example of this being done for a particular application set, molecular dynamics, was the RIKEN MDGRAPE-3. See Million-fold boost: how? So, no, it wouldn't be cheaper, unless the incumbents drastically up their game.

What branches of science, technology & engineering will this project involve?

All and every. As many as possible. Absolutely no holds barred. We want the very best of what we can squeeze from industry, academia, and anyone that wants to lend a hand or their brain.

What other benefits and applications do you foresee for such a project?

Simundo "supermodel" computers would be only somewhat specialised toward climate simulation and weather forecasting—that is to say, they are envisaged to be very much "general-purpose" machines. So actually, they'd be fantastic at all kinds of other important work across many disciplines such as medicine, astronomy, physics, chemistry, economics, communications, genetics, art, gaming, and so forth. Specific examples include protein folding, the cancer genome challenge, asteroid finding and tracking, particle accelerator data processing, brain simulation, and cinematic visualisation and rendering. The computer would be more than powerful enough to be a hub for "cloud computing", allowing everyone on the planet to have a virtual server better than today's best PCs. It might conceivably process all Internet traffic and cloud services simultaneously. It's likely to be the first machine with more "computing power" than the human brain.

What about the downside and opportunity for misuse?

Any technology can be used for good or ill, and supercomputers are no exception. Many of the fastest and most expensive supercomputers were funded and built to simulate nuclear bombs (circumventing test ban treaties), for spying, for designing fighter planes, and for other nefarious applications. We maintain a very different agenda. The charter of Simundo does, like CERN's for instance, specifically prohibit the application of its facilities in pursuance of any national interest at the cost of others, or in pursuit of military ends and weapons programmes. While we freely acknowledge that it is both unlikely and improper for Simundo to redefine global politics, the corollary must also hold: politics will not undermine Simundo's humanitarian and impartial, scientific stance.

Isn’t there a risk in developing something that could outsmart us?

…and take over?! Thinking of Asimov's "I, Robot"? Well, the answer is yes, and no! Firstly, Simundo supermodels are to be an accurate simulation of the real world and, as such, should be no threat at all. In other words, climate supermodels will be fairly "dumb" physical models that do not affect us physically—a simulated, virtual typhoon will surely be as harmless as crashing Flight Simulator at your desk. The risk of such machines becoming self-aware and having independent thought is, of course, worth contemplating, and many futurists do believe this will happen within a few decades. Simundo may accelerate the potential, but ultimately humans control what these machines do, what software gets run, and whether the machines can have any external effect on the world or cause harm – at least until they can really, really outwit us…

FAQ toys, factoids, other stuff and trivia…

Million, billion, trillion, bazillion, squintillion, googolplex… What?

Numbers are important; they really, really do matter. In fact, numerical solutions are the only way the chaos and complexity of climate systems' behaviour can even be approached or quantified. Dealing with the whole planet in global simulations leads to some astronomical numbers in terms of the computing power required. Yet big numbers are often way beyond humans' material experience and just don't make much sense. In recent news, "trillions" were bandied about like pocket change. Someone once said: "a billion here, a billion there, pretty soon you're talking real money"… OK, so the way to appreciate big (and small) numbers is to compare them against other big things that we understand, and to think in orders of magnitude.
What does a Zetta-class machine mean in relation to the earth? In your mind's eye, imagine that each FLOP wasn't some arithmetic operation, but instead printed a dollar bill. Now, a million dollars, $1,000,000, in one-dollar notes has a volume of about a cubic metre (new bills stacked neatly), weighs about a ton, and is the size of a fridge. Zetta would print pretty fast: in one second the whole planet would be covered, and we'd be over our heads in solid cash (about 2 metres deep). WOW! But that begins to make sense, right? We need many operations to be applied to the atmosphere, in little parcels, in every tick, all of which has to be done in much less than one second, over and over again… Are we losing you now?! Well, in under two weeks Zetta would print the entire volume of the earth, but now that's getting impossible to imagine (even when you're used to "quantitative easing", pleasing, teasing or whatever) — impossible even for the treasury secretary.
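If you'd like to check that picture yourself, here is the arithmetic behind it; the bills-per-cubic-metre figure is the rough estimate stated above, and the earth's surface area and volume are standard values:

```python
zetta_ops = 1e21            # dollar bills "printed" per second
bills_per_m3 = 1e6          # $1M stacks into roughly one cubic metre
earth_surface_m2 = 5.1e14   # total surface area of the earth
earth_volume_m3 = 1.08e21   # volume of the earth

cash_per_second_m3 = zetta_ops / bills_per_m3
depth_after_one_second = cash_per_second_m3 / earth_surface_m2
days_to_print_an_earth = earth_volume_m3 / cash_per_second_m3 / 86400

print(f"Depth after one second: {depth_after_one_second:.1f} m")
print(f"Days to print an earth: {days_to_print_an_earth:.1f}")
```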
Perhaps the most truly astounding ratio is that, in well under a century of technological advancement, the programmable computer will have progressed from Zuse to Zetta, from 1 FLOPS (1938) to 1,000,000,000,000,000,000,000 FLOPS (2016?). Obviously there's plenty of scope for rapid improvement when we apply our ingenuity!
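Taken at face value, those two dates imply an average doubling time of computing performance of roughly every 13 months; a quick sketch, using only the figures quoted above:

```python
import math

flops_zuse = 1.0     # roughly one operation per second in 1938
flops_zetta = 1e21   # a Zetta-class machine, pencilled in for 2016
years = 2016 - 1938

doublings = math.log2(flops_zetta / flops_zuse)
print(f"Doublings needed:      {doublings:.1f}")
print(f"Average doubling time: {years / doublings * 12:.0f} months")
```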
Yeah, and we were joking about bazillion, squintillion, and googolplex… But, OK, to round things off, here's a model world that took around 2 billion seconds (500,000 man-hours) to construct: