
Intelligent Multi-Agent Robotic Corporation (IMARC) pledges to observe, adhere to, and, where possible, advance the May 2019 Organisation for Economic Co-operation and Development (OECD) Principles on Artificial Intelligence, as elaborated in the OECD.AI Policy Observatory established in February 2020.
The table below sets out the OECD Principles on Artificial Intelligence. Our systems are designed to provide clients with the most advanced artificial intelligence technology, and we start with privacy in mind as we build new technologies, explore data, or work with providers in industry and government.
We utilize the latest first-party patented technologies to ensure both compliance and user opt-in whenever we capture or ingest data. This allows us to record data provenance from the beginning of the chain of custody through to analytics and medical support.
As new cities, states, and countries enact legislation, our dynamic components ensure that we can modify our data architecture, collection technologies, and patient monitoring to proactively protect personal information and privacy and to address citizen concerns.
Our technologies and data collection meet current regulations such as the GDPR, the CCPA, and Brazil’s LGPD. We also ensure that we remain at the forefront of industry best practices for privacy, encryption, anonymization, data storage, data transmission, and retention.
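To make these commitments concrete, the sketch below shows one minimal way that user opt-in enforcement, chain-of-custody logging, and jurisdiction-aware retention could be modeled. It is illustrative only: the class names, the retention windows in `RETENTION_DAYS`, and the `ingest` service are hypothetical stand-ins for this document, not a description of IMARC’s patented implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

# Hypothetical retention windows (in days) per regulatory regime; real values
# would be set by counsel from the applicable statute, not hard-coded here.
RETENTION_DAYS = {"GDPR": 30, "CCPA": 365, "LGPD": 30}

@dataclass
class ProvenanceEvent:
    """One link in the chain of custody: who touched the data, how, and when."""
    actor: str          # system component or practitioner identifier
    action: str         # e.g. "captured", "anonymized", "transmitted"
    timestamp: datetime

@dataclass
class PatientRecord:
    record_id: str
    regime: str                     # e.g. "GDPR", "CCPA", "LGPD"
    opt_in: bool                    # explicit consent captured at ingestion
    created: datetime
    chain_of_custody: list[ProvenanceEvent] = field(default_factory=list)

    def log(self, actor: str, action: str) -> None:
        """Append a provenance event so every downstream use stays traceable."""
        self.chain_of_custody.append(
            ProvenanceEvent(actor, action, datetime.now(timezone.utc))
        )

    def retention_expired(self, now: datetime) -> bool:
        """True once the applicable regime's retention window has elapsed."""
        window = timedelta(days=RETENTION_DAYS[self.regime])
        return now - self.created > window

def ingest(record: PatientRecord) -> PatientRecord:
    """Refuse any data that arrives without explicit user opt-in."""
    if not record.opt_in:
        raise PermissionError("cannot ingest data without user opt-in")
    record.log(actor="ingest-service", action="captured")
    return record
```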
The OECD has approved the use of its logo and Principles on AI on our website. The OECD has not participated in, approved, endorsed, or otherwise supported the projects of IMARC or IMARC Robotics, Inc. See below for more about the OECD’s work on Artificial Intelligence and the rationale for developing the OECD Recommendation on Artificial Intelligence.
The table below illustrates how IMARC’s AI Emergency Medical Management System upholds the OECD Principles on Artificial Intelligence.

PRINCIPLE 1.1: Inclusive growth, sustainable development and well-being:
- This Principle highlights the potential for trustworthy AI to contribute to overall growth and prosperity for all – individuals, society, and planet – and advance global development objectives.
- Stakeholders should proactively engage in responsible stewardship of trustworthy AI in pursuit of beneficial outcomes for people and the planet, such as augmenting human capabilities and enhancing creativity, advancing inclusion of underrepresented populations, reducing economic, social, gender and other inequalities, and protecting natural environments, thus invigorating inclusive growth, sustainable development and well-being.
IMARC AI EMERGENCY MEDICAL MANAGEMENT SYSTEM
- The FOMBIT emergency medical management system is designed to enable countries to combat COVID-19 (and any similar future pandemic), precisely to help preserve a country’s ability to continue economic activity while fighting the virus.
- The “Forward” aspect of FOMBIT is designed for deployment into the peripheries of societies, to reach the population groups underserved by existing medical facilities.
- A key characteristic of the FOMBIT approach is that it economizes resources, enhances medical-service productivity, and reduces cost, thereby sustaining economic development and growth.
PRINCIPLE 1.2: Human-centered values and fairness:
- AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and should include appropriate safeguards to ensure a fair and just society.
- AI actors should respect the rule of law, human rights and democratic values, throughout the AI system lifecycle. These include freedom, dignity and autonomy, privacy and data protection, non-discrimination and equality, diversity, fairness, social justice, and internationally recognized labor rights.
- To this end, AI actors should implement mechanisms and safeguards, such as capacity for human determination, that are appropriate to the context and consistent with the state of art.
IMARC AI EMERGENCY MEDICAL MANAGEMENT SYSTEM
- IMARC AI services will be offered only to licensed practitioners operating within the existing legal framework;
- IMARC will not permit its services to be used in any way that discriminates; its designated purpose is to enhance countries’ capacity to extend medical services to all segments of a population;
- Privacy concerns are directly addressed through compliance with HIPAA regulations;
- A key feature of FOMBIT is that it expands services while reducing both the number of medical personnel required and the risk to them, thus offering a more even and just playing field in the service/risk trade-off;
- The essential FOMBIT purpose is to empower medical teams to use superior, state-of-the-art, cost-effective AI-based services to supplement their own judgments and enhance their productivity for the benefit of entire communities.
PRINCIPLE 1.3: Transparency and explainability:
- This principle is about transparency and responsible disclosure around AI systems to ensure that people understand when they are engaging with them and can challenge outcomes.
- AI Actors should commit to transparency and responsible disclosure regarding AI systems. To this end, they should provide meaningful information, appropriate to the context, and consistent with the state of art:
– to foster a general understanding of AI systems,
– to make stakeholders aware of their interactions with AI systems, including in the workplace,
– to enable those affected by an AI system to understand the outcome, and
– to enable those adversely affected by an AI system to challenge its outcome based on plain and easy-to-understand information on the factors, and the logic that served as the basis for the prediction, recommendation or decision.
IMARC AI EMERGENCY MEDICAL MANAGEMENT SYSTEM
- The essence of the INTELLIGENT TELEMEDICINE embodied within the FOMBIT approach is a collaboration with each patient to build a personally derived, AI-assisted database (in the form of an Avatar) whose parameters are fully visible to the patient and his or her caregivers, subject to all HIPAA regulations.
- The entire process, from testing and diagnosis to treatment and post-incident monitoring, will involve collaboration with the patient and caregivers based on transparent interpretations of emerging data.
- All algorithms used in the analysis and transposition of medical data will be transparently available to patients, who may seek an explanation of the basis for an emerging diagnosis, triage, or other formulation, as sketched below.
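As one illustration of this transparency commitment, the sketch below pairs every system finding with a patient-readable explanation object. The `Diagnosis` and `Explanation` types, the factor-weight scheme, and `explain_to_patient` are hypothetical constructions for this document, not FOMBIT’s actual algorithms.

```python
from dataclasses import dataclass

@dataclass
class Explanation:
    """Plain-language account of why the system produced a recommendation."""
    factors: dict[str, float]   # input factor -> contribution weight
    summary: str                # human-readable rationale shown to the patient

@dataclass
class Diagnosis:
    patient_id: str
    finding: str
    confidence: float
    explanation: Explanation    # always attached, never optional

def explain_to_patient(dx: Diagnosis) -> str:
    """Render the rationale in plain language so the outcome can be
    understood and, if necessary, challenged (OECD Principle 1.3)."""
    top = sorted(dx.explanation.factors.items(), key=lambda kv: -abs(kv[1]))
    lines = [
        f"Finding: {dx.finding} (confidence {dx.confidence:.0%})",
        dx.explanation.summary,
        "Main factors:",
    ]
    lines += [f"  - {name}: weight {weight:+.2f}" for name, weight in top]
    return "\n".join(lines)

# Example: a patient or caregiver can always see what drove the result.
dx = Diagnosis(
    patient_id="anon-0001",
    finding="elevated fever risk",
    confidence=0.87,
    explanation=Explanation(
        factors={"temperature": 0.62, "heart_rate": 0.21, "age": -0.05},
        summary="Temperature readings over the last six hours drove this result.",
    ),
)
print(explain_to_patient(dx))
```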
PRINCIPLE 1.4: Robustness, security and safety:
- AI systems should be robust, secure and safe throughout their entire lifecycle so that, in conditions of normal use, foreseeable use or misuse, or other adverse conditions, they function appropriately and do not pose unreasonable safety risk.
- To this end, AI actors should ensure traceability, including in relation to datasets, processes and decisions made during the AI system lifecycle, to enable analysis of the AI system’s outcomes and responses to inquiry, appropriate to the context and consistent with the state of art.
- AI actors should, based on their roles, the context, and their ability to act, apply a systematic risk management approach to each phase of the AI system lifecycle on a continuous basis to address risks related to AI systems, including privacy, digital security, safety and bias.
IMARC AI EMERGENCY MEDICAL MANAGEMENT SYSTEM
- IMARC AI systems will meet HIPAA standards to ensure safety and security for patients and will be subject to annual cybersecurity audits.
PRINCIPLE 1.5: Accountability:
- Organizations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the OECD’s values-based principles for AI.
- AI actors should be accountable for the proper functioning of AI systems and for the respect of the above principles, based on their roles, the context, and consistent with the state of art.
IMARC AI EMERGENCY MEDICAL MANAGEMENT SYSTEM
- IMARC will be accountable for upholding the OECD AI Principles and will adhere to the “Ethics Guidelines for Trustworthy AI” promulgated by the European Union.

The OECD’s work on Artificial Intelligence and rationale for developing the OECD Recommendation on Artificial Intelligence
AI is a general-purpose technology that has the potential to improve the welfare and well-being of people, to contribute to positive sustainable global economic activity, to increase innovation and productivity, and to help respond to key global challenges. It is deployed in many sectors ranging from production, finance, and transport to healthcare and security.
Alongside benefits, AI also raises challenges for our societies and economies, notably regarding economic shifts and inequalities, competition, transitions in the labor market, and implications for democracy and human rights.
The OECD has undertaken empirical and policy activities on AI in support of the policy debate over the past two years, starting with a Technology Foresight Forum on AI in 2016 and an international conference, AI: Intelligent Machines, Smart Policies, in 2017. The Organisation also conducted analytical and measurement work that provides an overview of the AI technical landscape, maps the economic and social impacts of AI technologies and their applications, identifies major policy considerations, and describes AI initiatives from governments and other stakeholders at national and international levels.
This work has demonstrated the need to shape a stable policy environment at the international level to foster trust in and adoption of AI in society. Against this background, the OECD Committee on Digital Economy Policy (CDEP) agreed to develop a draft Council Recommendation to promote a human-centric approach to trustworthy AI that fosters research, preserves economic incentives to innovate, and applies to all stakeholders.
An inclusive and participatory process for developing the Recommendation
The development of the Recommendation was participatory in nature, incorporating input from a broad range of sources throughout the process. In May 2018, the CDEP agreed to form an expert group to scope principles to foster trust in and adoption of AI, with a view to developing a draft Council Recommendation in the course of 2019. The AI Group of experts at the OECD (AIGO) was subsequently established, comprising over 50 experts from different disciplines and different sectors (government, industry, civil society, trade unions, the technical community and academia).
Between September 2018 and February 2019, the group held four meetings: in Paris, France, in September and November 2018; in Cambridge, MA, United States, at the Massachusetts Institute of Technology (MIT) in January 2019, back-to-back with the MIT AI Policy Congress; and finally in Dubai, United Arab Emirates, at the World Government Summit in February 2019. The work benefited from the diligence, engagement, and substantive contributions of the experts participating in AIGO, as well as from their multi-stakeholder and multidisciplinary backgrounds. Drawing on the final output document of the AIGO, a draft Recommendation was developed in the CDEP in consultation with other relevant OECD bodies. The CDEP approved a final draft Recommendation and agreed to transmit it to the OECD Council for adoption at a special meeting on 14-15 March 2019. The OECD Council adopted the Recommendation at its meeting at Ministerial level on 22-23 May 2019.
For more information, visit the OECD website at: OECD.org.
