Last time we talked about A.I. having a go at social media, it wasn't exactly pleasant. Microsoft pulled its racist chatbot offline within days of launch, yet techies have been eager to bring machine learning back to one of the most used platforms on the internet. Now researchers from The Ohio State University (in Sage's back yard, no less) have teamed up with FireEye and Leidos to spin up an algorithm of a different kind. Only this A.I. bot isn't chatting with Twitter users; it's searching for data-security conversations.
Big Brother is doing what?
We’re not talking dystopian fiction here. It’s really just using A.I. for something it’s perfectly designed to do: find patterns. We’ve talked before about how A.I. is really only good at things humans, frankly, suck at. The algorithm in question is designed to sniff out any tweet where cybersecurity issues are mentioned—and that’s a lot of characters to sift through.
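To make the idea concrete, here's a minimal sketch of that kind of pattern-matching, assuming a simple keyword filter. The researchers' actual model isn't described in detail, so the term list and function below are purely hypothetical.

```python
# Illustrative sketch only: a naive keyword filter for security-related tweets.
# The term list is hypothetical; a real system would use a trained model.
SECURITY_TERMS = {"vulnerability", "exploit", "phishing", "malware", "breach"}

def mentions_security(tweet: str) -> bool:
    """Return True if the tweet contains any security-related term."""
    words = {w.strip(".,!?#@:").lower() for w in tweet.split()}
    return not words.isdisjoint(SECURITY_TERMS)

print(mentions_security("Heads up: new phishing campaign targeting banks"))  # True
```

Even a toy filter like this can chew through millions of tweets without breaking a sweat, which is the whole point: volume is exactly where humans fall down.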
This is an ideal application for A.I. since data security, at its core, is just like any other human endeavor: it can be foiled, because people leave predictable patterns behind. Plug machine learning into the Twitter-verse and it can surface a host of those patterns far more accurately than any carbon-based employee ever could.
This time will be better, really.
Basing A.I. on non-cultural cues seems to yield better results than Microsoft’s first attempt. We are all just too screwed up to collectively educate machine learning without creating a monster. So, instead, let’s turn it loose on formatting errors and phishing schemes.
What isn't said about this research project, however, is that many cyberthreats are really a matter of social engineering rather than social media. In fact, we saw an example of this with one of our recent hires, who received a phony email from our CEO asking for a favor. Schemes like these are used to harvest corporate passwords and account numbers.
A.I. is well suited to screening emails for awkward grammar and spurious requests, both tell-tale signs that someone is up to no good. And while some degree of training can help employees avoid taking the bait, A.I. could stop many of these messages before they ever reach an inbox.
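As a rough illustration of what that screening might look like, here's a sketch using crude hand-written heuristics. The phrase list, rules, and example addresses are all hypothetical; production systems would rely on trained models rather than anything this simple.

```python
# Illustrative sketch only: crude heuristics for flagging a phishing-style email.
# Phrases, rules, and addresses are hypothetical, not from any real product.
SUSPICIOUS_PHRASES = ("urgent favor", "wire transfer", "gift cards", "verify your password")

def flag_email(sender: str, claimed_name: str, body: str, known_senders: set) -> list:
    """Return a list of reasons the email looks suspicious (empty if none)."""
    reasons = []
    # An executive display name paired with an unrecognized address is a classic
    # CEO-fraud tell, like the phony email our new hire received.
    if claimed_name.lower() in {"ceo", "chief executive"} and sender not in known_senders:
        reasons.append("display name claims an executive, but address is unknown")
    lowered = body.lower()
    for phrase in SUSPICIOUS_PHRASES:
        if phrase in lowered:
            reasons.append(f"contains suspicious phrase: {phrase!r}")
    return reasons

known = {"ceo@company.example"}
print(flag_email("boss@freemail.example", "CEO",
                 "I need an urgent favor: please buy gift cards", known))
```

A real deployment would score grammar oddities and sender history too, but even rules this blunt catch a surprising share of the lazier scams.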
Have you come across a cyberthreat that you think A.I. wouldn’t be able to handle? Or, do you already use machine learning to prevent these kinds of pitfalls? Join us on Facebook or LinkedIn to add to the conversation—we’d love to hear from you!
Image ©: Smart Data Collective