Three huge issues with Microsoft’s approach to AI for intelligence and global security professionals

Andy Jenkinson, a recent connection of mine on LinkedIn, laid out not long ago, quite plainly for all to see, the systemic weaknesses in Microsoft’s approach to the race to introduce AI into the fields of intelligence and global security. But we shouldn’t blame Microsoft alone: the blame belongs to each and every company of Microsoft’s size that participates in the same business activities; it lies on the shoulders of everyone in the sector.

In his article, Andy points out the following:

Microsoft has developed artificial intelligence that can be used by American spies, deeming it safe because it is completely divorced from the Internet…

Microsoft said it wanted to deliver a truly secure system to the US intelligence community.

William Chappell, Microsoft’s chief technology officer for strategic missions and technology, said the system has an “air-gapped” environment that is isolated from the Internet.

Whilst the 'system' may be air-gapped, of course, Microsoft’s servers certainly are not, nor are they secure...


Here, meanwhile, from Tim Brandom, amongst many others reporting around two months ago, we have stories about how Russian hackers gained access to Microsoft systems: access the company admits it still cannot shake off:

Microsoft Corporation says it still cannot shake Russian hackers who compromised several email accounts belonging to company executives.

Midnight Blizzard — the group named by Microsoft as responsible for ongoing cyber attacks on their digital infrastructure — has reportedly used information obtained in the first successful hack to broaden its scope.

"In recent weeks, we have seen evidence that Midnight Blizzard is using information initially exfiltrated from our corporate email systems to gain, or attempt to gain, unauthorized access," the Microsoft Security Response Center said in a statement. "This has included access to some of the company’s source code repositories and internal systems.“

It’s surely clear enough by now, in light of both the above and many other events we’ve been notified of lately, that what needs delivering is not the air-gapping of systems used in mission-critical and real-time operational situations for intelligence and global security, but something far more fundamental: the re-architecting, from the ground up, of the existing operating systems and other software currently being shoehorned into service as bespoke solutions when in actual fact they’re cobbled together off-the-peg.

Such procedures and sales & marketing strategies just lead, inevitably, to the kind of vulnerabilities that an aggressive machine-primacy approach fosters (see 9/11, Putin’s Russia and its longitudinal strategising against the interests of the West, Hamas’s attack of October 2023, and so forth):

More examples of what a reliance on machines, promoted by the security industry for the benefit of its own already deep pockets, can deliver:

  • https://www.secrecy.plus/spt-it / http://spt-it.com | on the weaponisation of ourselves by bad-actor hackers and nation-states, as a direct result of the vulnerabilities that our ancient, tweaked, and re-tweaked operating systems and software platforms wilfully promote

And what we need instead:

Here, then, in a list of three items, are my objections and recommendations to those clients, perhaps like yourself, in the fields of security and intelligence who currently find themselves obliged to work with a sector that self-interestedly sustains machine primacy at the expense of all other approaches: a machine primacy that fails to play to law-enforcement, security, intelligence, and military personnel’s manifest strengths in operational and battlefield capabilities such as intuition, arational thinking, high-level domain expertise, and gut feeling:

  1. Microsoft’s lazy bandying about of terminology like air-gapping, to convince the rest of us that the systemically insecure can be isolated from vulnerability, is sheer bollocks. The solution doesn’t lie in isolating the bad, but in systematically rethinking that bad from scratch and making it newly secure by design, from beginning to end. The real problem is that once upon a time, in the early days, the systems such companies sold were secure: the admin was the user and the user was the admin. But from then to now, via Internet connectivity and the locating of our hard drives on other people’s computers in the cloud … well … it’s hardly surprising that only disaster is ours to embrace:

    https://gb2earth.com/pgtps/genesis | an example, from several years ago now, of a new kind of operating-system architecture for newly secure intelligence, military, and related sectors

  2. But it is not just Microsoft et al’s praxis we should take issue with here; it is their philosophies too. The first challenge is that they deeply, unshakeably believe that the immensely intuitive and human-heavy fields of intelligence, espionage, the military, and their like simply need more of the same. No matter that 9/11 happened with the best machine-primacy tech in place; that Putin’s Russia’s nonconformist individualism longitudinally strategises Western team-based approaches out of the ballpark; that Hamas’s attack of October 2023 happened under the very noses of surely the most machine-surveilled location on the planet; and that a handful of evil men from Islamic State killed hundreds of citizens in Moscow, in a Russia that prides itself on its repressive infrastructures and cultures.

    Well. I’m sorry. 9/11 was a surprise out of the blue. Let’s take that one out of the frame. But machines which only see the future on the basis of past patterns will never beat, hands down, those evil humans capable of an extreme creative criminality: a criminality which creates futures out of the genuinely new, and on the biggest of occasions. And that, no one in Big Tech has ever, to date, despite the examples given, taken on the chin … now have they?

  3. If we really want to deliver an AI fit for intelligence, global security, military operations, and even law-enforcement contexts, we have to re-engineer how we see the place of intuition, arational thinking, high-level domain expertise, and gut feeling in such fields. Microsoft et al simply aren’t even contemplating, never mind discussing, this.

These are matters we should debate far more openly in such fields, for the good of a wider democracy too, I dare say. And in relation to IT tech’s historical presumptions, without doubt:

Click to download the full “cognitive warfare and intuition” slide-deck

Finally …

If you would like to find out more about how GB 2 Earth can help you develop intuition-friendly architectures for mission-critical decision-making and real-time high-end operational contexts, please send an email to mil.williams@gb2earth.com, or click through to our Meet page:
