
Acceptable Use and Secure Development Policy for Artificial Intelligence (AI)

Objective
Establish the guidelines, principles, and procedures for the acceptable use and responsible development of artificial intelligence (AI) within the organization. The policy seeks to ensure that the deployment and evolution of AI systems are carried out in an ethical, transparent, secure, and sustainable manner, aligned with national and international values and regulations.

Scope
This policy is mandatory for all employees, contractors, collaborators, and partners involved in the use, development, implementation, maintenance, and auditing of AI systems. It applies to all projects, products, and services involving the collection, processing, or analysis of data using AI techniques.

Ethical and Governance Principles
Based on the Ethical Framework for AI in Colombia[1] and the AI Roadmap Guidelines[2], the principles that govern this policy are:

  • Transparency and explainability: Ensure that processes, algorithms, and automated decisions are understandable and communicable to both internal and external users, without compromising innovation or intellectual property.

  • Human oversight: Ensure that critical decisions are supervised and, ultimately, approved by humans, avoiding full automation in high-risk contexts. Processes must incorporate human-in-the-loop review at every stage.

  • Privacy and data protection: Comply with current regulations on personal data processing, implementing anonymization mechanisms and ensuring informed consent.

  • Security and resilience: Develop and implement technical and organizational controls that protect the integrity, availability, and confidentiality of AI systems.

  • Non-discrimination and inclusion: Prevent bias and ensure that AI systems promote fairness, avoiding any form of discrimination and fostering the inclusion of historically marginalized groups.

  • Sustainability and social responsibility: Ensure that the development and use of AI contribute to social, economic, and environmental well-being, prioritizing solutions that generate collective benefit.

Acceptable Use of AI
The use of AI systems must be limited to purposes that enhance productivity, innovation, and data-driven decision-making, while always respecting the defined ethical principles.

Applications must be vetted for their ability to prevent or limit the generation of malicious or heavily biased responses.

The option to share information for model training must be disabled. Where no such option exists, it must be verified that the data is not being used for training; if this cannot be verified, the application must not be used. This applies to all areas and formats involving AI usage.

Use must be reasonable. If a requirement can be easily addressed through an internet search, prioritize that method. Construct prompts with sufficient information to obtain the best response with the fewest iterations.

If the AI system does not consider inclusion and diversity, apply prompt engineering techniques to achieve more appropriate results.
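As an illustrative sketch only (the helper function, field names, and wording below are hypothetical, not mandated by this policy), a prompt assembled with explicit role, context, task, and inclusion framing might look like:

```python
def build_prompt(role: str, context: str, task: str, output_format: str) -> str:
    """Assemble a structured prompt so the model receives enough
    information in one pass, reducing the number of iterations."""
    return "\n".join([
        f"Role: {role}",
        f"Context: {context}",
        f"Task: {task}",
        f"Output format: {output_format}",
        # Explicit inclusion framing, reflecting the guideline above:
        "Constraint: use inclusive, non-discriminatory language and "
        "consider diverse user groups in any examples.",
    ])

prompt = build_prompt(
    role="High-school maths tutor",
    context="Students aged 14-16 with mixed prior performance",
    task="Design three practice word problems on linear equations",
    output_format="Numbered list, one problem per line",
)
```

Front-loading context this way supports the "fewest iterations" guideline, and the trailing constraint is one simple way to apply prompt engineering when a system's defaults do not account for inclusion.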

The following uses of AI are strictly prohibited:

  • Engaging in illegal, malicious activities or those that infringe on human rights.

  • Automating critical processes without human supervision, where safety or privacy may be compromised.

  • Manipulating, discriminating, or generating misleading information that negatively affects individuals or communities.

Guidelines for the Responsible Development of AI
The use of these tools must be guided by the stated principles at every stage, always seeking the benefit of the user and the protection of their data privacy.

AI solution development must include from the outset: ethical assessments, impact analyses (including bias analysis and robustness testing with humans), and mechanisms for continuous validation.
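One way to operationalize the bias-analysis step is a simple fairness metric such as the demographic parity gap. The sketch below is illustrative; the metric choice, group definitions, and acceptable thresholds depend on the project and are not prescribed by this policy:

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-outcome rates between any two
    demographic groups. predictions: 0/1 model decisions; groups:
    group labels of the same length."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Example: group "b" receives positive outcomes far less often (0.25
# vs 0.75), giving a gap of 0.5 that should trigger further review.
gap = demographic_parity_gap(
    predictions=[1, 1, 1, 0, 0, 0, 1, 0],
    groups=["a", "a", "a", "a", "b", "b", "b", "b"],
)
```

A metric like this can run as part of continuous validation, flagging deviations before a release rather than after deployment.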

Assess how varied and diverse the AI's outputs are across different scenarios, relative to the intended goal.

End users should not be exposed to products that do not meet the expected quality standards.

All AI projects must include complete documentation that covers:

  • A description of the model (in the case of agents and trained systems), algorithms, and data used. For Mentu, this should align with the PRD (Product Requirements Document).

  • Validation procedures and implemented control mechanisms.

  • Evaluation and monitoring criteria to ensure transparency, control, and traceability of automated decisions (if applicable).

  • The objective and justification for implementation.

  • The data source, in case training is involved.

The use of open-source tools and methodologies (as long as they are secure) will be encouraged, as well as the adoption of international standards that facilitate AI interoperability, security, and governance.

Training of technical and management teams will be promoted in topics such as ethics, security, privacy, and emerging trends in AI, ensuring staff is prepared to face new challenges—such as those outlined in the OWASP AI Top Ten, which lists the main threats in specific areas and is updated annually.

Products will be tested with users at different stages of development, and automated decisions will be assessed with the same rigor.

Give priority to the best interests of children and adolescents. Avoid profiling and ensure parental consent when handling personal data of minors.

Ongoing Training and Awareness on the Safe Use of AI
Model training and adaptation processes must be carried out with data that does not allow personal identification and must use authorized data sources.
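A minimal sketch of what "data that does not allow personal identification" can mean in practice: pseudonymizing direct identifiers and redacting obvious personal data from free text before it enters a training corpus. The regex patterns and hashing scheme below are illustrative assumptions, not requirements of this policy:

```python
import hashlib
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
ID_RE = re.compile(r"\b\d{7,10}\b")  # assumed format for ID numbers

def pseudonymize(user_id: str, salt: str) -> str:
    """Replace a direct identifier with a salted hash so records can
    still be linked without revealing who they belong to."""
    return hashlib.sha256((salt + user_id).encode()).hexdigest()[:12]

def redact(text: str) -> str:
    """Strip obvious personal identifiers from free text before it
    is added to a training dataset."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = ID_RE.sub("[ID]", text)
    return text

clean = redact("Contact ana@example.com, document 12345678.")
```

Note that pseudonymization alone is not full anonymization; combining it with redaction, aggregation, and review of quasi-identifiers is closer to the standard this section requires.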

Roles and Responsibilities

  • Senior Management: Approve the policy, allocate resources, and promote an organizational culture oriented toward responsible innovation.

  • Development and IT Team: Implement and maintain AI systems in accordance with defined technical and ethical standards and best practices.

  • Data Protection Officer: Oversee compliance with ethical principles, conduct impact assessments, and coordinate follow-up actions in response to deviations or incidents. It is recommended that this process be conducted by a multidisciplinary team.

  • End Users: Use AI solutions responsibly and report any anomalies or unexpected behavior that may compromise security, privacy, or fairness.


[1] Muñoz Rodríguez, V. M., & Londoño, C. C. (2021). Ethical Framework for Artificial Intelligence in Colombia (Version 1). Administrative Department of the Presidency of the Republic.
[2] Ministry of Science, Technology and Innovation. (2024). Roadmap for the Development and Application of Artificial Intelligence in Colombia. Directorate of Technological Development and Innovation.
