Trustworthy AI systems

AI solutions have become an integral part of human life, with many systems designed to make decisions that affect individuals and society, such as recommender systems, autonomous systems, healthcare systems and criminal-justice systems. AI/ML systems are also here to stay in some form or another, whether as smart platforms, embedded systems, stand-alone systems, cloud-based systems or decision-making systems.

Often it is not clear how an AI/ML system arrives at a particular decision, how certain the model is about that decision, or whether and when the model can be trusted. This concern is amplified in critical applications such as medical diagnosis, patient monitoring, elderly assistance and autonomous driving. Another issue is bias: AI/ML algorithms may learn to use traits such as race, colour, gender or religion in unjust ways, treating people differently during decision making.
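As a toy illustration of the "how certain is the model" question, the sketch below (an assumed example in plain Python, not a specific product or library) converts raw model scores into probabilities with a softmax and defers low-confidence decisions to a human reviewer, a common safeguard in critical applications:

```python
import math

def softmax(scores):
    """Turn raw model scores into a probability distribution."""
    exps = [math.exp(s - max(scores)) for s in scores]  # subtract max for numerical stability
    total = sum(exps)
    return [e / total for e in exps]

def decide(scores, threshold=0.8):
    """Return (predicted class index, confidence), deferring when confidence is low."""
    probs = softmax(scores)
    confidence = max(probs)
    if confidence < threshold:
        return ("defer-to-human", confidence)
    return (probs.index(confidence), confidence)

print(decide([4.0, 0.5, 0.2]))  # high-margin scores: a confident prediction
print(decide([1.1, 1.0, 0.9]))  # near-tied scores: flagged for human review
```

The confidence threshold here is illustrative; in practice it would be calibrated per application, and raw softmax probabilities are themselves only a rough proxy for true model certainty.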

Trustworthiness encompasses the following aspects:

  • Explainability: the AI system should be able to generate explanations for the predictions and decisions it makes.

  • Transparency: the AI system should be able to trace why its model behaved the way it did.

  • Interpretability: the results and explanations generated by the AI system should be understandable to the users and domain experts in context.

  • Accountability: addresses the question of who is to be held responsible for the decisions made.

  • Accuracy / Correctness: measures how well the AI/ML model performs its decision making.

  • Safety and Security: determines how well the data, processing and generated results are kept safe and secure in the AI/ML model.

  • Completeness: determines how well the model covers the decision-making space.

  • Verification and Validation: how well the AI system meets its specifications and requirements.

  • Reliability: determines whether the AI system will perform well over a given time period.

  • Availability: determines whether the AI system will remain responsive under the load of its data and decision-making processes.

  • Robustness: the ability to handle unforeseen (usually minor) changes in the input space.

  • Fairness: equal treatment and opportunity for all groups in the decision-making space.

  • Bias: systematic error in the data or the model.

  • Scope: the boundaries of requirements, specification and decision making.

  • Societal aspects: the effect of the AI system on society and behaviour.
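The fairness and bias aspects above can be made concrete with a metric. The sketch below (an assumed example, not a claim about any particular evaluation methodology) computes the demographic parity difference, one common fairness measure: under demographic parity, the rate of positive decisions should be roughly equal across groups.

```python
def demographic_parity_difference(decisions, groups):
    """Return the max gap in positive-decision rates across groups.

    decisions: list of 0/1 model decisions
    groups:    list of group labels (e.g. values of a protected attribute)
    """
    counts = {}
    for decision, group in zip(decisions, groups):
        total, positives = counts.get(group, (0, 0))
        counts[group] = (total + 1, positives + decision)
    rates = [positives / total for total, positives in counts.values()]
    return max(rates) - min(rates)

# Illustrative data: group "a" receives a positive decision 2 times in 3,
# group "b" only 1 time in 3, so the parity gap is about 0.33.
decisions = [1, 1, 0, 1, 0, 0]
groups    = ["a", "a", "a", "b", "b", "b"]
print(demographic_parity_difference(decisions, groups))
```

A gap of zero indicates identical positive-decision rates across groups; larger gaps flag a potential fairness problem worth investigating, though no single metric captures fairness completely.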

Given that many systems in a client’s business have AI embedded in some form, it may also require a specialized effort to identify such systems, evaluate them for trustworthiness, and then apply methods and tools to raise them to better levels of trustworthiness.

Composite Software Systems helps clients identify, define, scope, evaluate and improve AI/ML systems within the larger ecosystems of the client’s environment.


We help develop methods, tools and platforms that address the various trustworthiness aspects holistically. We also help develop a systematic method for evaluating the trustworthiness of systems, which client service lines can then follow. Our people also contribute to emerging standards on AI/ML systems.


We help identify and isolate AI/ML systems from traditional systems and offer trustworthiness evaluation of those AI/ML systems. We also help use that evaluation to improve the AI/ML systems or to position them in other ecosystems.