Can platform-wide AI ever fit into enterprise security?

Tan KW
Publish date: Tue, 18 Jun 2024, 05:29 AM

Opinion AI - loud, confident, and wrong. That's not a dig at generative AI's ability to hallucinate, although why not? Rather, it's about the bigger picture: platform-wide Recall from Microsoft and, oh dear, Apple Intelligence.

Both companies seem a bit annoyed that not everyone likes the idea of being watched by an interventionist robot. Microsoft rowed back a lot on Recall, while Elon Musk simply hates the Apple stuff. This from a man who wants to put chips in our heads. This time, heaven help us, he might be right.

This iteration of AI across the platform just cannot fit into the enterprise. Here, handling other people's data is highly regulated, at least in the EU and for those who handle EU-sourced data. Competence and compliance have to be demonstrated. Best-practice engineering and sector-appropriate protocols are the order of the day.

It's hard, expensive work that too often lags behind the threats, yet it has cohesion. Encryption of data in flight and at rest, with servers, management, and endpoint apps secured by well-engineered stacks in protected memory: these are the basics on which workable security is built.
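For illustration only, here's a minimal sketch of the at-rest half of that picture, using Python's cryptography package. The key handling is a placeholder assumption, not a production design - in real deployments the key would come from a KMS or HSM, never be generated in-process:

    # Minimal at-rest encryption sketch using Fernet (AES-CBC plus HMAC).
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # assumption: in practice, fetched from a key manager
    box = Fernet(key)

    record = b"customer: A. Vole, account: 12345"
    token = box.encrypt(record)   # authenticated ciphertext, safe to store on disk

    # Only code holding the key can recover the plaintext.
    assert box.decrypt(token) == record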

If only humans could be engineered out. As yet, no update has been developed that lets humans directly process or source encrypted data. At the moment it enters the eyeball or leaves the finger, it has to be in the clear, as vulnerable to evil as the quaking vole caught out of its burrow by the stern raptor's gaze. Many attacks in history have succeeded through tampered keyboards or remote spying on monitors and other endpoints.

This analog hole is ultimately unpluggable, but it can be minimized. People may need data in the clear, but they also need it in tiny chunks. The leaky endpoint has been and will always be a problem, but it's mostly one of handling humans. Platform-wide AI changes that equation in a way nobody foresaw.
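What "tiny chunks" might look like in code is least-privilege access: each human-facing task gets the one field it needs in the clear, never the full record. The record layout and task names below are invented for illustration:

    # Hypothetical least-privilege accessor: each task sees only its own field.
    RECORD = {"name": "A. Vole", "iban": "GB00BANK00000000", "balance": 120.50}

    FIELDS_BY_TASK = {
        "greet_customer": ["name"],
        "check_funds": ["balance"],
    }

    def view_for(task: str) -> dict:
        # Anything not on the task's allow-list never reaches the screen.
        return {k: RECORD[k] for k in FIELDS_BY_TASK.get(task, [])}

    print(view_for("greet_customer"))   # {'name': 'A. Vole'}; the IBAN stays sealed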

You can define and design everything your secured data touches inside your systems, and where it has to move externally, say for analytics, you can wrap it in solid crypto.
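One common way to do that wrapping before analytics data leaves the building is keyed pseudonymization: identifiers are replaced with HMAC digests so an external engine can correlate records without learning who they belong to. A sketch, with the key and field names assumed for illustration:

    # Sketch: pseudonymize identifiers with a keyed HMAC before export.
    import hashlib
    import hmac

    PSEUDONYM_KEY = b"held-internally-and-rotated"   # assumption: never shipped outside

    def pseudonymize(user_id: str) -> str:
        return hmac.new(PSEUDONYM_KEY, user_id.encode(), hashlib.sha256).hexdigest()

    event = {"user": pseudonymize("alice@example.com"), "action": "login"}
    # 'event' can now cross the boundary; reversing it needs the key you never sent.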

None of this is perfect. Supply-chain attacks, misconfigurations, and bodged updates are details to delight any devil. Even so, it's well-designed, competent, and defensible practice.

Now add platform-wide AI. It has two jobs. One is to pretend to be human to the apps, absorbing data in the clear as it becomes available. The other is to appear superhuman to the user, magically synthesizing suggestions and insights, taking the reins through massive data crunching. Massive data crunching means massive data, the more the merrier.

This does not sit well on the secure stack. In principle and, so far, in practice, it's a voracious consumer of data, invisible to the apps that source it, coupled with an analytics engine whose workings may be unpredictable and unfathomable, every so often squirting who knows what out to a secret cloud. You're expected to demonstrate compliance with that monster prowling the platform? There's no per-process security setting in the analog hole.

It doesn't matter how much the vendors claim their AI is secure: that almost everything is processed on-device, and that only anonymized, encrypted data will flow out to oh-so-secure cloud servers that forget everything in microseconds. Platform-wide AI is smeared like honey across the top of the stack, and we only have their word for it that it's ant-proof.

Even if all of that were completely true, you may not want it. You may not want intensive on-device processing eating your energy and clock cycles. You may not want to write a compliance report that boils down to relying on the marketing claims of a new and unproven technology weaponized like photon torpedoes in the latest Big Tech war. You may want to keep control.

No matter how you draw the diagram, you can't. Microsoft may have made Recall opt-in instead of opt-out - what on Earth were you thinking, guys? - but the tech's still waiting in the wings, they want you to use it, and don't tell me you've never seen an update mysteriously re-enable a feature you thought was six feet under.

There is simply no place in the security stack for the platform-wide AI on offer now. AI can, and hopefully will, have a place in security, but only when its job is defined, its access is completely under control, and its design and behavior are demonstrably secure.

It should go without saying, because it's been said for so many decades, that if you want a platform that isn't in thrall to the very worst urges of multibillion-dollar gambles, then FOSS is there for you. Enterprise security is about knowledge and control. Platform-wide AI gives you neither, but there's a path that gives you both. ®

https://www.theregister.com//2024/06/17/opinion_platformwide_ai_security/
