Artificial Intelligence vs Computational Intelligence

Author: Jeff, Senior Research Scientist
Five-minute read

Suffice it to say, Artificial Intelligence (AI) is enjoying its moment in the sun. From science to games to art, AI has found its way into nearly everything. But what about AI’s oft-overlooked cousin, Computational Intelligence (CI)? The two terms are often used synonymously, which makes sense given that CI is considered a sub-field of AI. After all, the term CI was coined at the same time and in the same place as AI: the original Turing Test. However, AI and CI are quite different in form, function and application. This blog aims to shed some light on the AI vs. CI debate and on how AIS is using CI to improve context-aware optimization.

AI

From a 50,000-foot view, AI is a branch of computer science that aims to give a machine the ability to make decisions based on pre-trained knowledge. In other words, AI gives an algorithm a facsimile of human cognition: the ultimate goal of an AI algorithm is to use prior information to make future decisions. Currently, the best-known example of applied AI is the generative large language model (LLM), such as ChatGPT.

CI

While CI falls under the umbrella of AI, it is unique in one key way: CI is used to make predictions in environments that are inherently uncertain and unstructured. Traditional AI algorithms are designed to deal with noise and randomness in their training data, but they assume their target application environment is deterministic. For example, an LLM knows it is working with discrete units of text; regardless of the input and output, it expects exact words. Put simply, “cat” goes in, “dog” comes out.

CI algorithms, by contrast, are built to work with imprecise data collected in an environment that is itself imprecise. They neither expect deterministic inputs nor provide deterministic outputs. Where an LLM expects “cat” and does not understand “ecat” or “catd”, a CI algorithm can resolve these almost-words back into “cat” by understanding the core concepts of text generation.
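To make the distinction concrete, here is a toy, hypothetical sketch in Python (the vocabulary and similarity threshold are invented for illustration and are not how a production CI system would be built): imprecise tokens are snapped back to the nearest known word, much as a CI algorithm resolves almost-words using its knowledge of the domain.

    from difflib import SequenceMatcher

    # Toy sketch: snap noisy tokens like "ecat" or "catd" back to the closest
    # word in a small known vocabulary. The vocabulary and similarity
    # threshold are invented for this example.
    VOCAB = ["cat", "dog", "food", "the", "ate", "its"]

    def closest_word(token, threshold=0.7):
        """Return the vocabulary word most similar to token, if similar enough."""
        best_word, best_score = None, 0.0
        for word in VOCAB:
            score = SequenceMatcher(None, token, word).ratio()
            if score > best_score:
                best_word, best_score = word, score
        return best_word if best_score >= threshold else token

    noisy = "the catd ate its ecat food"
    print(" ".join(closest_word(tok) for tok in noisy.split()))
    # -> "the cat ate its cat food"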


For example, one common application of CI is positioning and localization with limited GPS access. Onboard positioning sensors, like a wheel encoder or gyroscope, provide noisy, messy data. A GPS receiver provides much more accurate data, but only when it is available. A traditional localization algorithm can only be as accurate as its best input, meaning it degrades quickly without near-constant GPS updates. A CI solution that cleverly combines all data sources, however, can maintain a highly accurate position estimate through periods of GPS dropout. This ability to work with such uncertain information comes from CI’s ability to understand context.

Context-Aware CI

The main feature that distinguishes CI from other forms of AI is its ability to use environmental context. CI algorithms track not only their outputs but also the accuracy of those outputs over time. From a human-cognition perspective, CI uses its “context clues” to help solve a problem: the accuracy of its outputs over time as a function of the accuracy of its inputs over time.

As such, a CI algorithm can figure out that “ecat” and “catd” mean “cat”, so long as the word appears in a known context, e.g., a sentence like “the catd ate its ecat food”. The more complicated example of positioning and navigation using context looks like this (a simplified sketch in code follows the list):

  • The CI algorithm takes input from GPS and onboard sensors.
  • The CI algorithm determines the relative accuracy of each sensor using previous highly accurate measurements.
  • The CI algorithm learns about the uncertainty in the data and environment.
  • The CI algorithm re-optimizes in real-time to decide which data sources to use for positioning.
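To make that loop concrete, here is a deliberately simplified, hypothetical sketch in Python (the one-dimensional motion model, noise figures and weighting scheme are invented for illustration and are not AIS’s implementation): each sensor is weighted by the inverse of its recently observed error variance, so the estimate leans on GPS when a fix is available and coasts on the onboard sensors when it is not.

    import numpy as np

    # Minimal, hypothetical sketch of the loop above, reduced to 1-D position.
    # Each sensor is weighted by the inverse of its recently observed error
    # variance, estimated against the last high-confidence (GPS-backed) fix.
    rng = np.random.default_rng(0)

    def fuse(estimates, variances):
        """Inverse-variance weighted fusion of per-sensor position estimates."""
        weights = np.array([1.0 / v for v in variances])
        weights /= weights.sum()
        return float(np.dot(weights, np.array(estimates)))

    true_pos = 0.0
    odom_pos = 0.0                   # dead-reckoned position from onboard sensors
    odom_var, gps_var = 1.0, 0.05    # running error-variance estimates per source

    for step in range(1, 31):
        true_pos += 1.0                                # vehicle moves 1 m per step
        odom_pos += 1.0 + rng.normal(0.0, 0.2)         # noisy wheel-encoder update
        gps_available = step % 10 != 0                 # simulated GPS dropouts

        if gps_available:
            gps_pos = true_pos + rng.normal(0.0, 0.1)  # accurate fix when present
            # Context update: how far has dead reckoning drifted from the fix?
            odom_var = 0.9 * odom_var + 0.1 * (odom_pos - gps_pos) ** 2
            estimate = fuse([odom_pos, gps_pos], [odom_var, gps_var])
            odom_pos = estimate                        # re-anchor dead reckoning
        else:
            estimate = odom_pos                        # coast on onboard sensors

        print(f"step {step:2d}  truth {true_pos:6.2f}  estimate {estimate:6.2f}")

A fielded system would use a proper estimator such as a Kalman-family filter or factor graph, but the context-tracking idea, re-weighting data sources as their observed accuracy changes, is the same.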

Developing CI at AIS

Team AIS is currently studying the intersection of CI and machine learning for complex sensor fusion applications akin to the GPS-denied localization problem. Solving for position from noisy data is a well-known CI sensor fusion application. Building on this foundational research, AIS is seeking to understand how CI can carry over to developing, training and fielding advanced perception algorithms. AIS is leading research to find out whether perception algorithms, e.g., target detection and tracking, can combine many visual sensors, such as cameras, IR sensors and LIDAR, to provide a more accurate solution than current single-sensor systems.

Our research in this area seeks to develop new perception algorithms based on context-aware CI and to build new perception systems around them. To do this, we are leveraging sensor modeling and simulation (M&S) tools. With M&S, we can not only create massive sets of sensor data but also retain perfect knowledge of that data’s context. This allows us both to develop and train new sensor fusion techniques and to create context-aware CI optimization methods for deploying them on advanced, multi-sensor perception systems.
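As a notional sketch of what that looks like (the sensor names, noise levels and data layout below are invented for illustration and do not reflect AIS’s actual toolchain), simulation-backed data generation pairs every multi-sensor observation with the exact ground truth that produced it:

    import numpy as np

    # Notional sketch: emit every simulated multi-sensor observation alongside
    # the ground truth that produced it, so the data's context is known
    # perfectly. Sensor names and noise figures are illustrative only.
    rng = np.random.default_rng(42)
    SENSOR_NOISE = {"camera": 0.5, "ir": 0.8, "lidar": 0.1}  # std dev, meters

    def simulate_frame(true_range_m):
        """Return (observations, ground_truth) for one simulated time step."""
        observations = {name: true_range_m + rng.normal(0.0, sigma)
                        for name, sigma in SENSOR_NOISE.items()}
        ground_truth = {"range_m": true_range_m,
                        "noise_model": dict(SENSOR_NOISE)}
        return observations, ground_truth

    dataset = [simulate_frame(20.0 + 0.5 * t) for t in range(1000)]
    observations, truth = dataset[0]
    print(observations, truth["range_m"])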

AIS CEO to Participate in Artificial Intelligence Roundtable

Update: Due to the weather forecast, the roundtable discussion has been postponed. A new date is TBD.

AIS is proud to announce that its CEO, Charles Green, will participate in an upcoming roundtable discussion on artificial intelligence (AI). The free event, hosted by New York State Sen. Joseph Griffo, R-C, Rome, and Oneida County Executive Anthony J. Picente Jr., will take place on February 28 from 6:30 to 8 p.m. at Mohawk Valley Community College’s Rome Campus.

The roundtable will bring together elected officials, industry experts and thought leaders to explore the benefits and challenges associated with AI technology.

Green will contribute valuable insights on the topic alongside distinguished panelists, including Anthony Martino, Director of the Northeast Cybersecurity and Forensics Center at Utica University; Dr. Andy Drozd, President, CEO and Chief Scientist of ANDRO Computational Solutions, LLC; and Stephen J. Acquario, Executive Director of the New York State Association of Counties.

“The use of artificial intelligence is becoming more and more common,” said Sen. Joseph Griffo. “Given its widespread emergence, many people have questions and/or concerns and want to learn more about this technology.”

Oneida County Executive Anthony J. Picente Jr. highlighted the importance of addressing both the potential benefits and risks associated with AI.

“AI has the potential to significantly improve many areas of our lives,” said Picente. “But we must tread carefully as we explore its capabilities and the negative aspects that could arise.”

AIS is committed to fostering informed discussions and driving advancements in technologies that benefit society as a whole.
Learn more: www.ainfosec.com/about
Explainable Artificial Intelligence (XAI) Research

Author: Jeff, Senior Research Scientist
Contributions from Andres, Research Scientist III

Five-minute read

When it comes to finding ways to leverage Artificial Intelligence and Machine Learning (AI/ML), analysts are frequently overwhelmed by choice, and finding the right approach often requires in-depth technical knowledge that is hard to come by. A significant challenge facing the community and AI/ML practitioners is how to move these tools from development into the real world. The biggest hang-up is that AI/ML models are inherently a “black box”: it is hard for users and practitioners alike to explain what happens between input and output. We know that AI/ML models are capable of superhuman computing, yet we cannot clearly understand the logic behind an AI agent’s decision making. A team of researchers at AIS is beginning to chip away at this problem by leading cutting-edge research in Explainable AI (XAI).

Demystifying the Black Box

By their nature, AI and AI-driven agents are unexplainable: AI lacks the ability to provide context and explain its decision-making process. This lack of transparency immediately introduces uncertainty and forces potential adopters to question trustworthiness. For example, if a model or algorithm arrives at the correct answer 90% of the time, we must be able to understand why it fails the remaining 10% of the time in order to address its limitations. Absent this understanding, a model’s failures can seem random, which is unacceptable, especially if the goal is to apply AI/ML to high-risk or high-stakes applications. Thus, leaders are lukewarm when it comes to not only developing AI but, more importantly, deploying it.

Because AI lacks the ability to answer the important questions (what, when, where, how and why), we lack the ability to realize the entirety of its benefits. Those answers are hidden inside the difficult-to-open black box. A team of researchers made up of AIS, the Georgia Tech Research Institute (GTRI) and the Georgia Institute of Technology (GT) is taking a “psychology for AI” approach to tackle this problem. First, it is important to make a clear distinction between interpretable and explainable AI. Interpretable AI concerns the relationship between model inputs and model outputs, or one’s ability to predict outputs. Explainable AI is one’s ability to understand the what, when, where, how and why of an agent’s decision-making or computing process. The goal of XAI is to eliminate the black box between inputs and outputs, resulting in a transparent sequence from data going into the model to results coming out.

Progression of AI-Enabled Technologies

In recent years, XAI has gained significant momentum and enthusiasm from the DoD community as a way to advance the deployment of AI-enabled technologies. Our goal is not only to bridge the gap between inputs and outputs (interpret) but to enable exploration of that bridge (explain). We hypothesize a direct correlation between exploration and trust: as explainability matures, trust will continue to grow between the operational community and these AI/ML-enabled technologies. Though we may be far from the adoption of fully autonomous systems, significant benefits can already be realized with AI-enabled, human-in-the-loop technologies.

XAI Happenings at AIS

Team AIS is currently pushing explainability further by leveraging a combination of Reinforcement Learning (RL), world models and counterfactuals (what-ifs). Reinforcement Learning studies how agents should take actions to maximize reward; in other words, the better an agent’s action choices, the higher the cumulative reward it receives. World models are abstract representations of the AI agent’s environment. Counterfactuals, or “what-ifs”, are simply the alternative choices an agent could have made.

This research seeks to develop an explainability method that builds user trust in the policy decision process. To do this, we leverage forward projection, meaning the RL agent can explore alternative paths without having to take them. Combining this forward-projection capability with counterfactual analysis allows us to visualize the on-policy action, or factual, alongside a series of off-policy actions, or counterfactuals. These counterfactuals show the alternative actions an agent could have taken compared to the action it chose. This visualization allows us to analyze and better understand some of the logic behind the agent’s decision-making process.

Analyzing the Counterfactuals: The Robot Example

Take, for example, a simple robot using AI to move between floors of a building. Say you have an AI-enabled robot whose job is to navigate from the second floor to the first. The robot can take either the stairs or the elevator. The stairs are the quickest route to the exit; the elevator is some distance away. Suppose the robot chooses the elevator over the stairs, even though we would have anticipated that it would choose the stairs for their proximity and speed. Through analysis of the counterfactuals, we can learn why the robot chose the elevator: its counterfactuals show that had it taken the stairs, it would have fallen. Despite the stairs being the optimal path for speed, we as users can now see that the agent favors the route that allows it to successfully complete the task.
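A minimal, hypothetical sketch of this kind of analysis is shown below; the “world model”, reward values and fall probability are invented stand-ins for illustration, not the research implementation. It projects the factual action and its counterfactual forward many times and compares their expected returns.

    import random

    # Minimal sketch of forward projection with counterfactuals for the robot
    # example. The "world model", reward values and failure probability are
    # invented stand-ins, not the actual research implementation.
    random.seed(0)
    ACTIONS = ["take_stairs", "take_elevator"]

    def world_model_return(action, horizon=5):
        """Roll one action forward in a toy learned model and return the
        projected cumulative reward, without the robot acting in the world."""
        total = 0.0
        for _ in range(horizon):
            if action == "take_stairs":
                # Stairs are faster, but the model has learned they risk a fall.
                total += 2.0 if random.random() > 0.4 else -10.0
            else:
                total += 1.0  # The elevator is slower but reliable.
        return total

    def explain(policy_action, n_rollouts=500):
        """Compare the on-policy (factual) action with its counterfactuals by
        projecting each forward many times and reporting expected return."""
        for action in ACTIONS:
            expected = sum(world_model_return(action)
                           for _ in range(n_rollouts)) / n_rollouts
            tag = "factual" if action == policy_action else "counterfactual"
            print(f"{action:14s} ({tag:14s}) expected return: {expected:6.2f}")

    explain(policy_action="take_elevator")
    # Typical output: the elevator's expected return (about 5) beats the
    # stairs' (about -14), revealing why the agent avoided the "faster" route.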

This research continues to show promise as a reliable method for explainable AI and was recently presented by AIS at the 2023 International Joint Conference on Artificial Intelligence (IJCAI) Explainable Artificial Intelligence Workshop.

Learn more about advanced research at AIS
