Humans have been outsourcing tasks to computers ever since computers were invented. At first, the tasks were purely numeric. Over the years, as AI has gotten smarter, the range of tasks for which AI is competent has grown accordingly.

Though there is considerable debate nowadays around the limits of AI, so long as it is based on Gen2 principles it cannot replace the human in tasks that require judgment.

We believe the future of AI lies in extending human capabilities into areas where humans cannot act alone, simply because the scale of decision making exceeds human limits.

Sensepoint Has Developed

An ontology fabric that can wrap around any Gen2 AI/ML application in any domain, whether embedded in the real world (e.g., running on unmanned vessels) or embedded in the information world (e.g., running on a server). When wrapped, the fabric exposes all of the AI/ML's real-world touch points, on both the input and the output side.

A trust enhancer that runs on our fabric. It assesses the confidence of your embedded AI/ML application, generates plans to improve that confidence, and executes those plans, everywhere the application runs, over and over, until you can trust that the application won't make dumb mistakes and you can collaborate with it like a trusted partner.

Sensepoint Solution Architecture

Our Gen3 AI/NeuroSymbolic solution autonomously assesses and improves the warranted confidence of your Gen2 AI/ML-driven automation until it attains human-level trust.

The solution stack, layered from top to bottom:

  • Gen3 "AI/NeuroSymbolic" solution layer
  • Ontological embedding of your application: your Gen2 AI/ML application wrapped in our ontology fabric
  • Our ontology: domain-specific ontological types with a grammar that supports a specific application, plus upper ontological types with a grammar that is needed for, and supports, any application
  • Computing logic
1. We learn about the client's AI/ML application and, in conjunction with the client, evaluate its performance.

2. In collaboration with the client, we define what it means for the application to perform at a level equivalent to that of a human expert.

3. We embed the client's AI/ML application in our ontology fabric.

4. In collaboration with the client, we specialize our trust enhancer to autonomously assess and improve the confidence of every decision made by the client's AI/ML application.

5. We continuously monitor the performance of the Gen3-wrapped AI/ML application and the overall system.

Automation has arrived!

From Corporate Analytics and Data Science/AIML to self-driving cars and other autonomous systems, AI software algorithms running on computers are producing decisions and taking actions that have consequences in the real world: approving loans, deciding prices, choosing routes, optimizing logistics, as well as identifying things like people, products, contaminants and ships.

It lacks sufficient trust

The problem with AI-automated ("AIA") systems is trust: how to trust that they perform with at least the same accuracy and precision as the humans they replace, and how to trust that they do not make the kinds of stupid mistakes, especially consequential ones, that no human would make.

Although AIA systems provide data-based confidence scores, those scores are fundamentally internal assessments of the AI process (e.g., how closely the given data corresponds to what was seen in training).

We need a way to evaluate and enhance trust in automated systems

What's needed is an independent, external evaluation of your AIA applications, more akin to comparing an automated finding with the judgment of a human expert, plus an automated means of improving confidence when it is insufficient, until you can truly trust the AIA application.

Sensepoint solves the AIA trust problem by

  • Decomposing your AIA application into detection, reasoning, and response modules
  • Wrapping our ontology fabric around each module
  • Running our trust improver in the fabric that embeds your AIA applications
    • Evaluating the warranted trust of each module
    • Reasoning about how to improve that trust and creating plans to do so
    • Executing plans to improve warranted trust in the AIA application module (as permitted by the client)
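The assess/plan/execute loop described above can be pictured with a toy sketch. Everything here is illustrative: the module names, the numeric trust metric, the fixed per-step improvement, and every function name are assumptions for the sake of the example, not Sensepoint's actual method.

```python
# Entirely illustrative: the numeric trust metric and the fixed
# per-step improvement are assumptions, not Sensepoint's method.
from dataclasses import dataclass, field

@dataclass
class Module:
    name: str                 # e.g., detection, reasoning, or response
    trust: float              # warranted-trust score in [0, 1] (assumed metric)
    history: list = field(default_factory=list)  # improvement steps applied

def assess(module: Module) -> float:
    """Evaluate warranted trust (stands in for an independent, external evaluation)."""
    return module.trust

def plan(module: Module, target: float) -> list:
    """Propose improvement steps while trust is below the target."""
    if assess(module) < target:
        return [f"retrain-or-recalibrate:{module.name}"]
    return []

def execute(module: Module, steps: list, gain: float = 0.1) -> None:
    """Apply each permitted step; here a step simply raises trust a little."""
    for step in steps:
        module.history.append(step)
        module.trust = round(min(1.0, module.trust + gain), 2)

def improve_until_trusted(modules, target=0.9, max_rounds=20):
    """Repeat assess -> plan -> execute until every module meets the target."""
    for _ in range(max_rounds):
        pending = [m for m in modules if assess(m) < target]
        if not pending:
            break
        for m in pending:
            execute(m, plan(m, target))
    return {m.name: m.trust for m in modules}

modules = [Module("detection", 0.6), Module("reasoning", 0.75), Module("response", 0.9)]
result = improve_until_trusted(modules)
# result: {'detection': 0.9, 'reasoning': 0.95, 'response': 0.9}
```

The loop runs per module, which mirrors the decomposition into detection, reasoning, and response: a module that already meets the trust target (like "response" here) is left untouched, while the others are improved round by round.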

Sensepoint-ai can be used to evaluate and enhance trust in:

  • Autonomous systems
  • Automated processes running on corporate analytics
  • Data science or stand-alone AI/ML projects

Meet the Founders

Erik Thomsen

For over 20 years, Erik has pushed the envelope of ontologically grounded analytic and intelligent software technologies and the applications they support, with a recent emphasis on incorporating dynamic knowledge into the control of the training, deployment, and validation of AI/ML algorithms, a DARPA Gen3 AI problem.

Joss Stubblefield

For more than 20 years, Joss has been building and leading diverse interdisciplinary teams around technology and user needs. He was a co-founder of Weave Visual Analytics, focused on the evolution, maintenance, training, and support of Weave, an open-source, web-based visualization and analytics platform.

Contact Us


Cambridge Innovation Center

1 Broadway

Cambridge, MA 02142

In the heart of Kendall Square.