Monica Anderson is an AI Epistemologist, Founder of Syntience Inc and Sens.AI, and an ex-Googler.
She has researched Deep Discrete Neuron Networks for natural language understanding since 2001, with some success, and is currently working on launching a cloud-based microservice using an Understanding Machine.
Asif: Tell us about yourself and your company
Monica: I have been working with industrial-strength AI since the 1980s. I’ve
done NLP, big data, and Machine Learning for Google, Kosmix, Kanisa,
and in two of my own companies (Sens.AI and Syntience).
I also created three well-received expert systems for Cisco.
I have used Syntience Inc. to manage my research since 2001. Our goal, and
the main result so far, is a novel Deep Neural Network algorithm for
human language understanding and generation.
I call it Organic Learning because it starts out as an empty system,
and then, during unsupervised learning from a plain-text corpus of
online books, a so-called “Connectome Algorithm” incrementally adds
simplified software equivalents of discrete neurons and synapses
wherever they are needed… in a process resembling organic growth.
This is not an improvement of Deep Learning (DL); it is a fruit of a
different tree. The main difference, the use of discrete “neurons”
rather than arrays of floating-point numbers as in DL, provides
several amazing advantages. You never need to touch (compute with) a
neuron that isn’t currently in context, which means we can keep
billions (soon trillions) of concepts and hypotheses in memory and
they cost no computation, time, or power. At most about ten
thousand concepts ever need to be accessed to understand
what you are reading, and all of them can be trivially found by following
synapse-like direct links. Another important advantage concerns
correlations: in DL you discover them using convolution, but the
stochastic gradient descent phase invalidates this information, so you
have to re-discover them again and again, and these convolutions are
expensive. In contrast, when Organic Learning discovers a correlation
between two neurons (concepts or hypotheses), it connects them using a
synapse which stays around for as long as it is helpful. These
mechanisms, and the overall design of the algorithm as a stream-based
understander of deeply semantic sequential data, mean that the
algorithm can learn language thousands of times faster than any
DL-based approach. And it does all of this without the need for a GPU.
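To make the sparse-access claim concrete, here is a minimal sketch in Python. Everything in it (the `Neuron` class, the `activate` function, the two-hop depth) is my own illustration of the general idea, not Syntience’s actual data structures: following direct synapse links out from the neurons currently in context touches only those neurons, no matter how many exist in memory.

```python
# Illustrative sketch only: discrete neurons linked by direct synapse
# references. Neurons outside the current context are never visited,
# so they cost no computation regardless of how many exist in memory.

class Neuron:
    def __init__(self, concept):
        self.concept = concept
        self.synapses = []  # direct links to other Neuron objects

def activate(context_neurons, depth=2):
    """Follow synapse links out from the in-context neurons.

    Returns the ids of every neuron actually touched; all other
    neurons in memory are never computed with.
    """
    seen = set()
    frontier = list(context_neurons)
    for _ in range(depth):
        nxt = []
        for n in frontier:
            for m in n.synapses:
                if id(m) not in seen:
                    seen.add(id(m))
                    nxt.append(m)
        frontier = nxt
    return seen
```

Even with billions of neurons allocated, the cost of `activate` depends only on the few thousand reachable from the current context.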
You can think of fresh synapses as correlation-based hypotheses, and
if they contribute positively to understanding, then they will be
strengthened over time. If not, then they will be discarded. This is
Evolutionary Computation inside a Deep Neural Network. As an aside, we
can think of DL as being roughly congruent with the mammal vision path
(which is why DL does images so well) but OL provides the more ad-hoc
neuron-to-neuron connectivity we find in neocortex which is required
for general language and world understanding.
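The strengthen-or-discard lifecycle described above can be sketched as a simple loop. This is a hypothetical rendering of the idea (the weights, thresholds, and function name are mine, not the actual Organic Learning code):

```python
# Hypothetical sketch: synapses as correlation-based hypotheses that are
# strengthened when they help understanding and discarded when they don't.

def update_synapses(synapses, helped, strengthen=1.0, decay=0.5, floor=0.1):
    """synapses: dict mapping (neuron_a, neuron_b) pairs to weights.
    helped: the set of pairs that contributed to understanding this round.
    Helpful synapses gain weight; unhelpful ones decay, and any weight
    falling below the floor is discarded -- a simple evolutionary step.
    """
    survivors = {}
    for pair, weight in synapses.items():
        weight = weight + strengthen if pair in helped else weight * decay
        if weight >= floor:
            survivors[pair] = weight
    return survivors
```

Run over many rounds, this keeps only the hypotheses that repeatedly earn their keep, which is the evolutionary-computation flavor described above.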
OL started working well in July 2017 and we’ve been improving and
measuring the algorithm since then. We are ready for pilot engagements
with companies with suitable language understanding problems.
Specifically, we are currently working on a cloud-based microservice I
call an “Understanding Machine” (UM). The API is very simple. A client
sends a request containing some text and the UM will return a set of
“concept node IDs” that can be manipulated and stored as (e.g.) Python
sets of numbers. These sets will be “semantically repeatable”, i.e.
insensitive to changes in syntax. If you are building a chatbot, you
can (in the development phase) retrieve a set of concept nodes for
“canonical” statements like “Please show me the balance of my checking
account”. Then, in production, you send all user input given to the
chatbot to the same Understanding Machine and if the user asks to see
their checking account balance, you will get back the same (or very
similar) set of nodes, no matter how the user phrased the request; the
output from the UM is disambiguated to the point where it can be
directly used by your business logic. By delegating language
understanding to our Understanding Machine, you can avoid all NLP work
such as parsing, stemming, and handling typos and elisions, any and all
machine learning, and a lot of analysis of historical customer interaction
data. You still need to test, and, as required, refine your set of
“canonicals” until they cover a sufficient fraction of the user input
space. This will be transformative in any domain that depends on language understanding.
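The chatbot pattern above can be sketched from the client side. The matching logic below is my own illustration (the Jaccard score, the intent names, and the numeric node IDs are assumptions, not the UM’s published API); in practice the concept sets themselves would come back from the Understanding Machine:

```python
# Client-side sketch: match a user's concept-node set against canonical
# sets retrieved during development. Node IDs here are made-up integers;
# in a real system both sets would come from the Understanding Machine.

def jaccard(a, b):
    """Overlap score between two sets of concept node IDs."""
    return len(a & b) / len(a | b) if a | b else 0.0

def best_canonical(user_nodes, canonicals):
    """canonicals: dict mapping intent name -> set of concept node IDs.
    Returns the intent whose canonical set best matches the user input.
    """
    return max(canonicals, key=lambda name: jaccard(user_nodes, canonicals[name]))
```

Because the UM’s output is semantically repeatable, differently phrased requests map to nearly the same node set, so a simple overlap score is enough to route them to business logic.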
Asif: What do you predict will be some of the biggest business opportunities & industries affected by Artificial Intelligence?
Monica: Literally, every branch of business and industry could use full-blown
language understanding. Our UMs could be used for analyzing patents
and contracts, parsing hospital patient records, reading research
reports, classifying web pages by topic, and for user interaction in
systems like Siri and Alexa. The difference would be like interacting
(likely by speaking) with today’s Siri versus talking to a competent co-worker.
And there is no doubt in my mind that full language understanding will
be the next vector that moves us closer to recognizably intelligent
machines. I hesitate to label our Understanding Machines as “AI”
simply because they were designed to understand but NOT to reason,
which means they will not ever become fully intelligent… at least
not without significant further advances in AI Epistemology. But there
are billions of dollars’ worth of business to be made by just
understanding languages, situations, and big chunks of the world as
described in language found in books and on the web.
Asif: What is something you wish someone had told you when you first started your profession?
Monica: I got into the field too early… when we still believed that
“reasoning and behaviors are where it’s at”. I now know that everything
starts with understanding. You cannot reason about that which you do
not understand. And this difference is still not discussed in AI
education anywhere. Everyone getting into the field should know enough
about AI Epistemology to see what they are really trying to do. Nobody
had done the groundwork, so I researched it for a decade and finally
wrote and published my own primer in AI Epistemology. I urge anyone who
wants to do anything at all even close to AI to read the entire primer;
it takes an hour.
The paradox is that all STEM education is reductionist but 99% of
human problem solving is holistic. You can make breakfast without
doing science, or even without reasoning, by just repeating what
worked yesterday. You do not compute a differential equation when
braking for a stop light. This paradox is the reason for the
reductionist blindness among programmers and mathematicians who move
into AI that prevents them from seeing the importance of concepts like
understanding itself, overlapping deeply nested patterns, emergence,
reduction, salience, etc., which are all part of the epistemology of
understanding. And the STEM-induced desire to use science for
everything has prevented us from building pre-scientific machines that
jump to conclusions on scant evidence. Because that’s what
intelligence does. Scant evidence is all we’ll ever have.
If you make (through programming) a model of (a part of) the world,
then you have written a program that can solve certain problems in
that domain. It is not a general AI, because you restricted it to the
domain you modeled.
The only alternative is to make an empty system that knows nothing but
is capable of learning anything. Any claims of having created an AGI –
a “general” AI – in a system that cannot learn (specifically, learn
from its own mistakes) are hyperbole. This is the basis for my decade-long
push and evangelism for “Model Free Methods”. Deep Learning, when
it arrived, provided lots of support for this idea.
Asif: Many people believe that Artificial Intelligence could bring the apocalypse, what are your views?
Monica: AI completely changed in 2012 when Deep Learning demonstrated to
a wide audience that holistic (model-free) methods could beat all
reductionist (model-based and hand-programmed) attempts at many
AI-level tasks. And all talk of an AI apocalypse, starting with the
theory of recursive self-improvement, and most of the fearmongering
coming from Bostrom and MIRI and others, came from classical AI and
can now be dismissed as quaint misunderstandings from a previous era.
Reductionist AI is dead; it will never work, and most of the dangers
we’ve heard about for decades can be ignored as based on an obsolete
understanding of what AI can and cannot be. Again, Epistemology will
tell you what will work and what won’t. All improvement of our AIs
will come through more learning and more efficient methods to learn.
And we can control learning to any degree we want. Most AIs in the
coming decade will live in the cloud, may not be able to learn
anything after they are installed, and may have average lifespans on
the order of milliseconds, just long enough to complete some simple
understanding task… after which we discard them. It’s hard to see
these frozen understanders spontaneously becoming harmful.
There are still dangers; we cannot rule out human abuse of AI capabilities.
Asif: What books have influenced your thoughts the most?
Monica: Mostly Tor Nørretranders: “The User Illusion”, but this just provided
the capstone to decades of studying holism and reductionism and their
respective advantages for problem-solving. William Calvin’s books, and
especially “The Cerebral Symphony” are also important because they
show how useful evolutionary computation can be for understanding and
thinking. Robert Pirsig’s “Zen and the Art of Motorcycle Maintenance”
highlights the reductionist-holistic split in society and in our own thinking.
Asif: What advice would you give startups when starting their journey in Artificial Intelligence?
Monica: AI startups are not much different from other startups. Make a small
safe bet to make some money. Bet on something further out and you may
fail or make lots of money. I recommend against going into an
established existing domain unless you have a serious algorithmic
advantage. A small AI company opening a new market using yesterday’s
state-of-the-art machine learning is most likely to succeed. And just
because Deep Learning works well doesn’t mean that all the things
we’ve been labeling “AI” over the years will work better today.
Chatbots are a case in point; I believe that the market still hasn’t
seen any Deep Learning or other AI breakthroughs worth founding a
chatbot company on. I expect that to change only with the availability of
our Understanding Machines.