About

Hi, I’m Steven Mascaro. You may remember me from... actually, never mind that. In brief, I’m a researcher, consultant and programmer with a keen interest in causality, causal modelling (with Bayesian networks) and simulation.

I keep this blog mostly to talk about causal Bayesian reasoning. I’m aiming to collect a set of simple ideas, explanations and practical tips on causal thinking and Bayesian reasoning (which I think are very much entangled) that are accessible and useful to researchers, modellers and anyone else interested. Many posts will be on topics that are already well understood, but that I’m hoping to make more accessible. (And even if I reach and help only a handful of people, that would make the blog worthwhile.) Sometimes I’ll also venture into things that are much more exploratory and experimental. I’ll try to make it clear which is which.

As I reboot my blog in January 2024, generative AI has dipped a toe, and perhaps more, into the pool of general intelligence. This may seem a world away from the topic of this blog, but generative AI works so well largely because of its ability to make very good estimates of conditional probabilities across a wide range of useful scenarios. This is a lot like what Bayesian networks try to do on a smaller scale. Generative AI works exceptionally well, but I am concerned: we have little idea how these networks encode the structure of those conditional probabilities. The structure isn’t clear and isolable; instead, it’s spread out across an enormous matrix of weights that no human can ever hope to natively read. (And no, there are no Neos.) If humans have any hope of understanding that structure, we need to create models that humans can understand. Generative AI models aren’t it. I would argue causal Bayesian models are our best chance, although there’s no reason why generative AI can’t help build those models. I may even post about this on occasion.
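To make the contrast concrete, here is a minimal sketch of a causal Bayesian network in plain Python. The network (Rain and Sprinkler causing WetGrass) and every number in it are illustrative assumptions of mine, not drawn from any real data or from a specific post; the point is only that in a causal model, every link and every conditional probability sits on the page where a human can read it.

```python
# A minimal causal Bayesian network in plain Python. The network
# (Rain -> WetGrass <- Sprinkler) and all of its probabilities are
# illustrative assumptions, chosen only to show how readable the
# structure of a small causal model can be.
from itertools import product

P_rain = {True: 0.2, False: 0.8}        # P(Rain)
P_sprinkler = {True: 0.4, False: 0.6}   # P(Sprinkler)
P_wet = {                               # P(WetGrass=True | Rain, Sprinkler)
    (True, True): 0.99, (True, False): 0.90,
    (False, True): 0.80, (False, False): 0.01,
}

def joint(rain, sprinkler, wet):
    """P(Rain, Sprinkler, WetGrass), factorised along the causal graph."""
    p_w = P_wet[(rain, sprinkler)]
    return P_rain[rain] * P_sprinkler[sprinkler] * (p_w if wet else 1 - p_w)

# A diagnostic query by brute-force enumeration:
# P(Rain=True | WetGrass=True)
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
print(f"P(Rain | WetGrass) = {num / den:.3f}")  # about 0.418
```

Observing wet grass raises the probability of rain from the prior of 0.2 to about 0.42, and every number driving that answer can be inspected, questioned and defended. No one can do the same with a transformer’s weights.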

Prior to generative AI, technology was already taking on an increasing share of the responsibility for decision-making. Of course, people would specify the rules by which technology would work, and technology would execute them. But those rules were becoming ever more conditional, often abstracting and mixing conditions in non-obvious ways. Nonetheless, we could say they operated within relatively clearly defined boundaries, mostly out of necessity. With generative AI, those clearly defined boundaries are no longer necessary. It’s still really early days, but a good example of this is Auto-GPT.

Obviously, I don’t think the increasing responsibility of technology in decision-making is a bad thing; after all, I work here, and much of the work we do aims either to support decision-making or to automate aspects of it. But, as I wrote in 2013, I do think we need to recognise how crucial a role technology increasingly plays, and we need to discuss what this means (beyond dystopian or utopian science fiction) and decide how we want it to work. Now, it’s just a bit more urgent.
