Why you need intelligent people to create AI.


by Chaim Sajnovsky, www.b7dev.com manager


We often read news and articles about companies adding the secret sauce of Artificial Intelligence to their products and services and hitting the bull's-eye of success. And success means money, IPOs, press coverage, and a whole new world of happiness and joy.

At the same time, we (engineers and developers) receive requests to add this golden gift of AI to all kinds of projects. It seems complex AI algorithms are a piece of cake to deploy and integrate, done and applied in a breeze.

Well, they're not. AI implies doing proper research, hours and hours of experiments and evaluations, and, most important, an engineering team composed of intelligent (no pun intended) people.

The team can be dragged for days and days through never-ending riddles until some logical idea comes to light and marks the right path to follow.

Someone has to lead the way through the logic and the not-so-evident details. Let me tell you a short story…

Maybe some of you are aware of the work of Abraham Wald, a renowned mathematician born in the Austro-Hungarian Empire in 1902.

Young Abraham Wald

He fled Austria for the US when the Nazis arrived in 1938. There he found a job as a member of the Statistical Research Group (SRG), where he applied his statistical skills to various wartime problems, including methods of sequential analysis and sampling inspection.

One of the problems the SRG worked on was examining the distribution of damage to aircraft in order to advise on how to minimize bomber losses to enemy fire. There was an inclination within the military to give greater protection to the parts that received more damage, but Wald assumed that damage must be more uniformly distributed and that the aircraft that did return, or show up in the samples, had been hit in the less vulnerable parts.

The results are often popularly simplified into the idea that Wald suggested greater protection for the fuselage and tail even though the available evidence showed damage mainly on the wings. Wald noted that the study only considered the aircraft that had survived their missions: the bombers that had been shot down were not present for the damage assessment. The holes in the returning aircraft, then, represented areas where a bomber could take damage and still return home safely.

Wald proposed that the Navy instead reinforce the areas where the returning aircraft were unscathed, since those were the areas that, if hit, would cause the plane to be lost. His work is considered seminal in the then-fledgling discipline of operational research.

Air force people came to Wald expecting some kind of obscure calculation from a scientist about which parts to reinforce. Wald, a really clever person, deduced something simpler, with no fancy formula: show me where the RETURNING planes ARE NOT hit. Those are the most fragile areas.
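Wald's reasoning can be sketched in a few lines of code. This is a toy simulation of my own, not anything Wald actually computed: each plane takes one hypothetical hit, either in the "engine" (assumed fatal) or the "wings" (assumed survivable), and we analyze only the planes that make it back.

```python
import random

random.seed(42)

# Assumed survival probability for a hit in each area (toy values).
SECTIONS = {"engine": 0.0, "wings": 1.0}

# Each plane takes one hit in a random area; only some planes return.
planes = [random.choice(list(SECTIONS)) for _ in range(10_000)]
returned = [hit for hit in planes if random.random() < SECTIONS[hit]]

# Naive analysis over RETURNING planes only: every observed hit is on
# the wings, suggesting (wrongly) that the wings need more armor.
wing_hits = sum(1 for hit in returned if hit == "wings")
print(f"Returning planes: {len(returned)}, wing hits: {wing_hits}")

# Wald's insight: the area never damaged on survivors is exactly the
# area where a hit is fatal, so that is what needs reinforcing.
reinforce = set(SECTIONS) - set(returned)
print("Reinforce:", reinforce)
```

The naive count over survivors points at the wings; looking at what is *missing* from the sample points at the engine.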

The Wald example shows why, above anything else, you need intelligent people on your AI or data science team. AI intends to emulate real intelligence processes in a computer. And the most valuable asset in the team is those clever people who, without necessarily being mathematicians or expert programmers (they may be math, stats, or dev people too), know how to avoid common biases at analysis time, producing that "oh" moment that will define the project's success.

One of the biggest issues in AI is the problem of "false positives": results that seem to confirm the proposed hypothesis but bear no logical relationship to it.

Those cases require a fresh take, usually filtering at a different level, which itself requires someone who notices the little details.
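To make the idea concrete, here is a minimal sketch of my own (the messages, rule, and word list are all invented for illustration): a naive rule flags any message containing "free" as spam, which matches the hypothesis that spam mentions free stuff, yet it also catches legitimate messages. A second-level filter, looking at a different signal, removes the false positives.

```python
# Hypothetical labeled messages: (text, is_actually_spam)
messages = [
    ("win free money now", True),
    ("are you free for lunch", False),
    ("free shipping on your order", False),
]

# Naive first-level rule: "contains 'free'" flags everything here.
naive_flags = [text for text, _ in messages if "free" in text]

# Two of those matches have no real relationship to spam at all.
false_positives = [t for t, is_spam in messages
                   if "free" in t and not is_spam]

# Second-level filter: require an additional, independent spam signal
# (another spam-associated word besides "free") before accepting.
SPAM_WORDS = {"win", "money", "now"}
filtered = [text for text in naive_flags
            if SPAM_WORDS & (set(text.split()) - {"free"})]

print("naive:", naive_flags)
print("false positives:", false_positives)
print("after second filter:", filtered)
```

The first rule "works" on the surface; only the extra filtering level, noticed by someone paying attention to the details, separates the real signal from the coincidental matches.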

I'm not saying that AI development teams should not include data science doctors, statistics specialists, and top-notch developers; I just think the presence of plain clever team members is a must.

Implementing AI draws on an ever-growing, always-evolving array of techniques, like a builder's set of tools. There is no magic bullet that you can apply to every case and that will work like a charm. Seasoned teams experiment with several angles of attack on a problem until they find the right one. More often than not, it takes a lot of trial and error, refining the hypothesis you are trying to prove. Again, coming up with fresh or challenging ideas is not something you learn in college, but a skill that only the right people have.

When you get in touch with a company or consulting agency doing AI, enquire more about wits than about theoretical knowledge. Asking the right questions is the key to getting the right results.