A few short months ago…
In November, I was preparing for a presentation at Gener8or, hosted by Capitol Records in Los Angeles, and decided to do something a little different.
I decided to add an extra slide to see what the audience thought about an idea I had been joking about with my team for months. I have always considered AI an overused term, and when it wasn't overused, it was certainly being misused by marketers to sell basic sorting algorithms.
Machine learning certainly had plenty of viable uses for us. But AI? Should we call it that?
It felt like a bit of a stretch to call algorithmic sorting AI, and it still does. It's not that I dislike the term, because it sounds great; the CEO of an advertising tech firm has to know what looks good on paper, right? My issue was with how the term gets misapplied left and right to low-ambition ideas.
I just couldn’t BS our way into AI — there had to be something substantial that would be worthy of the title “Artificial Intelligence.”
I would also have a very hard time facing the world with a half-baked offering… like raising an AdderCoin ICO or launching "AI Budgeting" or some other shallow cash grab on tech that truly deserves more than what some are doing with it. This overuse and misuse of a term that means so much to me is why I looked at Adder AI as a bit of a meme in our early development.
Surely we didn’t have any business making autonomous machine learning algorithms? Or did we?
Meet George Jetson
Yes, we did. And we still do. That’s the answer. Keep reading, though.
Fast forward to two weeks ago, when NVIDIA announced the early release of the Jetson Nano (something tells me this was supposed to be named the Jensen, after their increasingly swanky CEO, but marketing stepped in).
We were already excited about the new 'microcontroller', if you could even call it that. Pi and Arduino fans would likely shudder at the specs packed onto the $99 board, though. It's more like a very small, low-power-draw system-on-module (SOM) neuroprocessing unit.
For my hardware nerds… go buy one of these ASAP. We got ours early because of a special perk in the NVIDIA CUDA Developer Program (translated: they saw how much we’ve spent on their web store for our research), but they should be available soon if they aren’t already.
In muggle speak, this device is a baby AI brain that hasn't learned a thing yet. It's a computer on a chip about the size of a deck of cards.
There are two boxes pictured, though. Gracie — George’s… companion, let’s say — has also been initialized. They’ll operate as a binary pair for the simulations we intend to run. We’ve already run several neural network models on these little machines, and have big plans for the future of our little AI brains. Leave a comment if you see where I’m going with this 😉 .
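If you're curious what "running a neural network" actually boils down to, here's a toy sketch in plain Python: a hand-wired 2-2-1 network that computes XOR, the classic example of a problem a single neuron can't solve. To be clear, this is purely illustrative and the weights are hand-picked rather than learned; it isn't one of our models, and real work on the Nano goes through NVIDIA's GPU/CUDA stack rather than pure Python.

```python
import math

def sigmoid(z):
    """Squash a weighted sum into the (0, 1) range."""
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, w1, b1, w2, b2):
    """Forward pass of a tiny 2-2-1 network with sigmoid activations."""
    # Hidden layer: each neuron takes a weighted sum of the inputs plus a bias.
    hidden = [sigmoid(sum(xi * wi for xi, wi in zip(x, row)) + b)
              for row, b in zip(w1, b1)]
    # Output layer: one neuron over the hidden activations.
    return sigmoid(sum(h * w for h, w in zip(hidden, w2)) + b2)

# Hand-picked weights so the network computes XOR on {0, 1} inputs:
# hidden neuron 1 approximates OR, hidden neuron 2 approximates NAND,
# and the output neuron ANDs the two together.
w1 = [[20.0, 20.0], [-20.0, -20.0]]
b1 = [-10.0, 30.0]
w2 = [20.0, 20.0]
b2 = -30.0

for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    print(x, round(forward(x, w1, b1, w2, b2)))  # prints 0, 1, 1, 0
```

Training is just the process of finding weights like these automatically instead of by hand, and that's the part the Nano's GPU accelerates.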
Y’know what we say — we’ve been busy.
I'll write more on how we got started programming these and training AI when we sit down for an interview with our mighty pair of miniature processors. The exciting tell-all interview is coming soon!
Until then — check out another one of our articles here!