Increase resilience to Global Catastrophic Risks? Or avoid them altogether?

After recently learning about two different paradigms for approaching threats ("risk" and "resilience"), I am formulating my first thoughts on when we should use one way of thinking, and when the other.

Scalable Randomness

I’ve recently been reading “The Black Swan” (by Nassim Nicholas Taleb) and “How to Predict Everything” (by William Poundstone), and was stunned by their overlap. Granted, interleaving your reading is probably not the best way to steer clear of confusion, but I would literally read a section on Zipf’s law in bed before sleeping to… Continue reading “Scalable Randomness”

How good are we at predicting the future?

After getting a bit lost in the question of whether we should in principle be able to have a clue about the future ramifications of our actions, I would like to turn back to the question of how well we’re actually doing so far. It looks like, for long-term projections, we are indeed pretty clueless. This document… Continue reading “How good are we at predicting the future?”

Why are we failing at long-term (catastrophic) risk-assessment?

I’ve noticed being confused about this, but luckily I now have blogging as a go-to tool for dealing with confusion. Overall, I feel there are two types of reasons people cite when they say that we can’t predict long-term catastrophic risks. One is confined to the methods we are currently using and goes… Continue reading “Why are we failing at long-term (catastrophic) risk-assessment?”

AI risk: bullets, bullet-points, and poems

“What are the primary risks to society of failure for AI systems, and how can these risks be monitored and addressed at scale?” (DeepMind Ethics & Society) I found myself cringing away from the question while at the same time dismissing it — a sure sign that I should look at it for at least half… Continue reading “AI risk: bullets, bullet-points, and poems”