My Cool AI Agent

My Cool AI AgentMy Cool AI AgentMy Cool AI Agent

My Cool AI Agent

My Cool AI AgentMy Cool AI AgentMy Cool AI Agent
  • Home
  • HIPAA & HITRUST
  • TOGAF EA Framework
  • Databricks-Fabric-Snowflk
  • Kubernetes - Docker
  • Technical Full Stack 2025
  • RAG / Vector DB LangChain
  • ML & Transformers
  • Graph Neural Networks
  • AEM Jira SQL Python
  • No-Code AI Worflows
  • Model Context Protocol
  • Azure, AWS, GCP, GitHub
  • AI Agents
  • APIs / Tools
  • Azure AI Foundry
  • Azure Data Fabric
  • Transform Sub-quadractic
  • TensorF PyTorch LangChain
  • AGI - SAI - 2027
  • More
    • Home
    • HIPAA & HITRUST
    • TOGAF EA Framework
    • Databricks-Fabric-Snowflk
    • Kubernetes - Docker
    • Technical Full Stack 2025
    • RAG / Vector DB LangChain
    • ML & Transformers
    • Graph Neural Networks
    • AEM Jira SQL Python
    • No-Code AI Worflows
    • Model Context Protocol
    • Azure, AWS, GCP, GitHub
    • AI Agents
    • APIs / Tools
    • Azure AI Foundry
    • Azure Data Fabric
    • Transform Sub-quadractic
    • TensorF PyTorch LangChain
    • AGI - SAI - 2027
  • Home
  • HIPAA & HITRUST
  • TOGAF EA Framework
  • Databricks-Fabric-Snowflk
  • Kubernetes - Docker
  • Technical Full Stack 2025
  • RAG / Vector DB LangChain
  • ML & Transformers
  • Graph Neural Networks
  • AEM Jira SQL Python
  • No-Code AI Worflows
  • Model Context Protocol
  • Azure, AWS, GCP, GitHub
  • AI Agents
  • APIs / Tools
  • Azure AI Foundry
  • Azure Data Fabric
  • Transform Sub-quadractic
  • TensorF PyTorch LangChain
  • AGI - SAI - 2027

HITRUST CSF

Download PDF

HIPAA Datasheet

Download PDF

Cloud Read Only - Backup

Download PDF

HIPAA & HITRUST - EPIC

Vanta

 HITRUST, faster | Vanta 


 Message from Vanta 


 HITRUST & HIPAA 

HITRUST CSF

Cybersecurity Framework | HITRUST 


HITRUST Framework for Cybersecurity and Compliance Success 


MyCSF Cybersecurity Compliance Framework Tool | HITRUST 


Unpacking HITRUST CSF v11: What's Changed? | Cloudticity 

HIPAA Checklist


 What is HIPAA Incident Management? 


 Thank You For Requesting Our Checklists 


 Effective HIPAA Policy Management 


 Why Compliance Officers Use Compliance Software 


 The Ultimate HIPAA Compliance Checklist for 2025 + Free PDF 

HITRUST MYCSF PORTAL

HITRUST Portal 

EPIC

 Epic Systems Modules & Software - Hyperspace, Healthy Planet, ClinDoc 


 Epic Cosmos 


 epic hyperspace - Search 


 Our Software | Epic 


 open.epic 


 open.epic :: Explore By Interface Type 


 Maximizing Impact with Epic Architecture: A Comprehensive Guide - Surety Systems 


Cloudcitity 

 Epic EHR Hosting Options Comparison – which is right for you? 


 The Challenges of Migrating Epic EHR to AWS | Cloudticity 




 

Epic EHR AI Tools

Epic Systems is at the forefront of integrating AI into its Electronic Health Record (EHR) systems, offering a range of AI tools designed to enhance patient care and streamline administrative processes. Here are some of the key AI tools and features currently in development or recently announced by Epic:

  • Generative AI: This tool personalizes patient responses, streamlines handoff summaries, and provides up-to-date insights for providers. It is designed to help doctors revise message responses, letters, and instructions into plain language that patients can understand. 1
  • Emmie: A virtual assistant that acts as a digital concierge, answering patient questions before appointments to improve patient education. It will also assist with scheduling and provide billing assistance. 1
  • Art and Penny: AI assistants for clinicians and patients, respectively, designed to act as active digital colleagues and improve provider operations, increase patient engagement, and reduce administrative burdens. 1
  • CoMET: A generative medical event model that helps doctors use real-world evidence to improve patient treatment and care decisions. 1
  • Conversational AI: This feature helps patients get ready for medical appointments by asking about the goals of their visit and summarizing information for both patients and physicians. 1
    These AI tools are part of Epic's ongoing efforts to innovate and improve the healthcare experience, leveraging the power of AI to enhance patient care and administrative efficiency. 5
  • 5 Sources

EpicArtificial Intelligence | EpicForbesElectronic Health Record Giant Epic Rolling Out New AI ToolsView all


 Epic UGM 2025: Epic touts new AI tools 


 DAX Copilot: New customization options and AI capabilities for even greater productivity - Microsoft Industry Blogs 


 https://x.com/HeyEpic/status/1956341039247036517 


 https://x.com/HeyEpic/status/1957844340882936118 

Introducing Emmie—AI that helps patients.  Informed by your chart and connected devices, Emmie can explain results with context, answer open-ended questions to guide patients through healthcare, and take actions like scheduling visits.
 

Ascension - AmSurg M/A


 Healthcare | Ascension 


 Home - Amsurg Surgery Center 

 How to Move from Paper to Cloud Software in Your ASC 

 Our Tips for Onboarding New Technology in Your ASC in 2022 


 Ascension Enters into an Agreement to Acquire AMSURG | Ascension 


 Ascension to Acquire AMSURG in ASC Blockbuster Deal - Ambulatory Surgery Center News 


 Ascension to acquire AmSurg and its 250 ASCs | Healthcare Finance News 


May 8 2024 Ascension Ransomware Attack

 What we know about the cyberattack on Ascension hospitals in Wisconsin 


 Health care giant Ascension says 5.6 million patients affected in cyberattack - Ars Technica 


 Ransomware Recovery for Hospitals | Cloudticity 


 Ransomware Recovery for Hospitals | Cloudticity 


AWS and EPIC

 HIPAA Compliance - Amazon Web Services (AWS) 


 Home - Epic on FHIR 

 Epic Systems Modules & Software - Hyperspace, Healthy Planet, ClinDoc 

 Epic Cosmos 

 Epic | ...With the patient at the heart 

 Artificial Intelligence | Epic 

 Epic eyes new AI features to consolidate stranglehold on US EHR business 

 Microsoft launches Dragon Copilot, a new voice-activated AI assistant for doctors - Tech Startups 


 

EPIC / Microsoft Dragon Copilot - DAX

Ransomware

Say something interesting about your business here.

MS Dragon Copilot - MS Fabric 

MS Dragon Copilot - MS Fabric 

MS Dragon Copilot - MS Fabric 

MS Dragon Copilot - MS Fabric 

MS DAX Copilot

HIPAA vs HITRUST

Say something interesting about your business here.

HIPAA vs HITRUST

Say something interesting about your business here.

Ransomeware

What's something exciting your business offers? Say it here.

Ransomware

Give customers a reason to do business with you.

AI Security

Give customers a reason to do business with you.

OWASP Top 10 - LLM AI Security

Give customers a reason to do business with you.

EPIC - Security for AI & Ransomware

Photo Gallery

Photo Gallery

Ascension/AMSURG - HITRUST - AI Security

Detail your services


 Healthcare | Ascension 


 Home - Amsurg Surgery Center 

 How to Move from Paper to Cloud Software in Your ASC 

 Our Tips for Onboarding New Technology in Your ASC in 2022 


 Ascension Enters into an Agreement to Acquire AMSURG | Ascension 


 Ascension to Acquire AMSURG in ASC Blockbuster Deal - Ambulatory Surgery Center News 


 Ascension to acquire AmSurg and its 250 ASCs | Healthcare Finance News 


May 8 2024 Ascension Ransomware Attack

 What we know about the cyberattack on Ascension hospitals in Wisconsin 


 Health care giant Ascension says 5.6 million patients affected in cyberattack - Ars Technica 


 Ransomware Recovery for Hospitals | Cloudticity 


 Ransomware Recovery for Hospitals | Cloudticity 


AWS and EPIC

 HIPAA Compliance - Amazon Web Services (AWS) 


 Home - Epic on FHIR 



Key Features of Epic's Cloud-Based EHR


 Epic on Azure – Epic Electronic Health Record Software | Microsoft Industry 




  1. Comprehensive EHR Solutions: Epic provides a wide range of modules tailored to various healthcare specialties, including outpatient care, emergency services, surgical management, and oncology. This modular approach allows organizations to customize their EHR to meet specific needs. 1
  2. Patient Engagement: Epic emphasizes patient engagement through its robust patient portal, MyChart, which allows patients to access their health information, schedule appointments, and communicate with healthcare providers. The system also supports telehealth options, enabling video visits and remote patient monitoring. 1
  3. AI Integration: Epic is integrating artificial intelligence into its EHR to enhance clinical decision-making and operational efficiency. This includes predictive analytics for patient outcomes and personalized treatment recommendations, making AI tools accessible within clinical workflows. 1
  4. Cloud Infrastructure: Epic's EHR can be hosted on Microsoft Azure, providing benefits such as improved scalability, security, and disaster recovery. This cloud environment allows healthcare organizations to reduce operational costs and enhance their ability to respond to changing needs. 1
  5. Interoperability: Epic's cloud-based solutions enhance interoperability with other health systems, facilitating better coordination of care and improving patient outcomes. This is particularly beneficial for organizations looking to streamline referrals and enhance communication among providers










Black Cat Ransomware

 <br/> 


 HITRUST 2.pdf 

 684af4e889e93f204d89eb43_HITRUST CSF.pdf 

 Cloudticity | Cloud for Healthcare 



 Request a demo | Collibra 

 AI governance: Key for addressing the Executive Order on safe, secure and trustworthy artificial intelligence | Collibra 




 elated search

application security framework

information security 

policy framework

data security framework

cyber security 

governance framework

nist information security framework

iso security framework

information assurance framework

cobit 5 for information security

information security maturity model

fisma framework

information security policy

information security frameworks



 2025 HIPAA Security Rule Segmentation Guide | Identity-Based Compliance Steps 


 

Information Security for Ransomware

To protect against ransomware attacks, organizations should implement a comprehensive information security strategy that includes:

  • Regular and continuous data backups: This helps limit costs from ransomware attacks and often avoids the need to pay the ransom demand. 1
  • Multi-factor authentication (MFA): Implementing MFA across the organization and for all services can significantly reduce the risk of unauthorized access. 1
  • Identity, credential, and access management (ICAM): Policies and practices that manage user identities and access to systems and applications. 1
  • Ransomware response plans: Establishing clear procedures for responding to ransomware incidents, including identifying, protecting against, detecting, responding to, and recovering from ransomware events. 2
    Organizations should also stay informed about ransomware threats and tactics by reviewing advisories and resources provided by authoritative organizations like CISA, MS-ISAC, and the FBI. 1
  • 4 Sources


 Ransomware Resilience with Immutable Backup Storage | Book a Demo Promo 


Object storage for Veeam: overview, benefits | Object First 



 Building a Ransomware Resilient Architecture | eSecurity Planet 


 


AI agent#

AI agents are artificial intelligence systems capable of responding to requests, making decisions, and performing real-world tasks for users. They use large language models (LLMs) to interpret user input and make decisions about how to best process requests using the information and resources they have available.


AI chain#

AI chains allow you to interact with large language models (LLMs) and other resources in sequences of calls to components. AI chains in n8n don't use persistent memory, so you can't use them to reference previous context (use AI agents for this).


AI embedding#

Embeddings are numerical representations of data using vectors. They're used by AI to interpret complex data and relationships by mapping values across many dimensions. Vector databases, or vector stores, are databases designed to store and access embeddings.


AI memory#

In an AI context, memory allows AI tools to persist message context across interactions. This allows you to have a continuing conversations with AI agents, for example, without submitting ongoing context with each message. In n8n, AI agent nodes can use memory, but AI chains can't.


AI tool#

In an AI context, a tool is an add-on resource that the AI can refer to for specific information or functionality when responding to a request. The AI model can use a tool to interact with external systems or complete specific, focused tasks.


AI vector store#

Vector stores, or vector databases, are databases designed to store numerical representations of information called embeddings.


API#

APIs, or application programming interfaces, offer programmatic access to a service's data and functionality. APIs make it easier for software to interact with external systems. They're often offered as an alternative to traditional user-focused interfaces accessed through web browsers or UI.


canvas (n8n)#

The canvas is the main interface for building workflows in n8n's editor UI. You use the canvas to add and connect nodes to compose workflows.


cluster node (n8n)#

In n8n, cluster nodes are groups of nodes that work together to provide functionality in a workflow. They consist of a root node and one or more sub nodes that extend the node's functionality.


credential (n8n)#

In n8n, credentials store authentication information to connect with specific apps and services. After creating credentials with your authentication information (username and password, API key, OAuth secrets, etc.), you can use the associated app node to interact with the service.


data pinning (n8n)#

Data pinning allows you to temporarily freeze the output data of a node during workflow development. This allows you to develop workflows with predictable data without making repeated requests to external services. Production workflows ignore pinned data and request new data on each execution.


editor (n8n)#

The n8n editor UI allows you to create and manage workflows. The main area is the canvas, where you can compose workflows by adding, configuring, and connecting nodes. The side and top panels allow you to access other areas of the UI like credentials, templates, variables, executions, and more.


entitlement (n8n)#

In n8n, entitlements grant n8n instances access to plan-restricted features for a specific period of time.

Floating entitlements are a pool of entitlements that you can distribute among various n8n instances. You can re-assign a floating entitlement to transfer its access to a different n8n instance.


evaluation (n8n)#

In n8n, evaluation allows you to tag and organize execution history and compare it against new executions. You can use this to understand how your workflow performs over time as you make changes. In particular, this is useful while developing AI-centered workflows.


expression (n8n)#

In n8n, expressions allow you to populate node parameters dynamically by executing JavaScript code. Instead of providing a static value, you can use the n8n expression syntax to define the value using data from previous nodes, other workflows, or your n8n environment.


LangChain#

LangChain is an AI-development framework used to work with large language models (LLMs). LangChain provides a standardized system for working with a wide variety of models and other resources and linking different components together to build complex applications.


Large language model (LLM)#

Large language models, or LLMs, are AI machine learning models designed to excel in natural language processing (NLP) tasks. They're built by training on large amounts of data to develop probabilistic models of language and other data.


node (n8n)#

In n8n, nodes are individual components that you compose to create workflows. Nodes define when the workflow should run, allow you to fetch, send, and process data, can define flow control logic, and connect with external services.


project (n8n)#

n8n projects allow you to separate workflows, variables, and credentials into separate groups for easier management. Projects make it easier for teams to collaborate by sharing and compartmentalizing related resources.


root node (n8n)#

Each n8n cluster node contains a single root nodes that defines the main functionality of the cluster. One or more sub nodes attach to the root node to extend its functionality.


sub node (n8n)#

n8n cluster nodes consist of one or more sub nodes connected to a root node. Sub nodes extend the functionality of the root node, providing access to specific services or resources or offering specific types of dedicated processing, like calculator functionality, for example.


template (n8n)#

n8n templates are pre-built workflows designed by n8n and community members that you can import into your n8n instance. When using templates, you may need to fill in credentials and adjust the configuration to suit your needs.


trigger node (n8n)#

A trigger node is a special node responsible for executing the workflow in response to certain conditions. All production workflows need at least one trigger to determine when the workflow should run.


workflow (n8n)#

An n8n workflow is a collection of nodes that automate a process. Workflows begin execution when a trigger condition occurs and execute sequentially to achieve complex tasks.

 

Detail your services

 

OWASP AI Security and Privacy Guide

The OWASP AI security & privacy guide consists of two parts:

  1. How to address AI security: 200+ pages of material presented as the OWASP AI Exchange website
  2. How to address AI privacy

Artificial Intelligence (AI) is on the rise and so are the concerns regarding AI security and privacy. This guide is a working document to provide clear and actionable insights on designing, creating, testing, and procuring secure and privacy-preserving AI systems.

See also this useful recording or the slides from Rob van der Veer’s talk at the OWASP Global appsec event in Dublin on February 15 2023, during which this guide was launched. And check out the Appsec Podcast episode on this guide (audio,video), or the September 2023 MLSecops Podcast. If you want the short story, check out the 13 minute AI security quick-talk.

Please provide your input through pull requests / submitting issues (see repo) or emailing the project lead, and let’s make this guide better and better. Many thanks to Engin Bozdag, lead privacy architect at Uber, for his great contributions.


How to address AI security

This content is now found at the OWASP AI exchange and feeds straight into international standards.

How to address AI privacy

Privacy principles and requirements come from different legislations (e.g. GDPR, LGPD, PIPEDA, etc.) and privacy standards (e.g. ISO 31700, ISO 29100, ISO 27701, FIPS, NIST Privacy Framework, etc.). This guideline does not guarantee compliance with privacy legislation and it is also not a guide on privacy engineering of systems in general. For that purpose, please consider work from ENISA, NIST, mplsplunk, OWASP and OpenCRE. The general principle for engineers is to regard personal data as ‘radioactive gold’. It’s valuable, but it’s also something to minimize, carefully store, carefully handle, limit its usage, limit sharing, keep track of where it is, etc.

In this section, we will discuss how privacy principles apply to AI systems:


1. Use Limitation and Purpose Specification

Essentially, you should not simply use data collected for one purpose (e.g. safety or security) as a training dataset to train your model for other purposes (e.g. profiling, personalized marketing, etc.) For example, if you collect phone numbers and other identifiers as part of your MFA flow (to improve security ), that doesn’t mean you can also use it for user targeting and other unrelated purposes. Similarly, you may need to collect sensitive data under KYC requirements, but such data should not be used for ML models used for business analytics without proper controls.

Some privacy laws require a lawful basis (or bases if for more than one purpose) for processing personal data (See GDPR’s Art 6 and 9). Here is a link with certain restrictions on the purpose of an AI application, like for example the prohibited practices in the European AI Act such as using machine learning for individual criminal profiling. Some practices are regarded as too riskful when it comes to potential harm and unfairness towards individuals and society.

Note that a use case may not even involve personal data, but can still be potentially harmful or unfair to indiduals. For example: an algorithm that decides who may join the army, based on the amount of weight a person can lift and how fast the person can run. This data can not be used to reidentify individuals (with some exceptions), but still the use case may be unrightfully unfair towards gender (if the algorithm for example is based on an unfair training set).

In practical terms, you should reduce access to sensitive data and create anonymized copies for incompatible purposes (e.g. analytics). You should also document a purpose/lawful basis before collecting the data and communicate that purpose to the user in an appropriate way.

New techniques that enable use limitation include:

  • data enclaves: store pooled personal data in restricted secure environments
  • federated learning: decentralize ML by removing the need to pool data into a single location. Instead, the model is trained in multiple iterations at different sites.


2. Fairness

Fairness means handling personal data in a way individuals expect and not using it in ways that lead to unjustified adverse effects. The algorithm should not behave in a discriminating way. (See also this article). Furthermore: accuracy issues of a model becomes a privacy problem if the model output leads to actions that invade privacy (e.g. undergoing fraud investigation). Accuracy issues can be caused by a complex problem, insufficient data, mistakes in data and model engineering, and manipulation by attackers. The latter example shows that there can be a relation between model security and privacy.

GDPR’s Article 5 refers to “fair processing” and EDPS’ guideline defines fairness as the prevention of “unjustifiably detrimental, unlawfully discriminatory, unexpected or misleading” processing of personal data. GDPR does not specify how fairness can be measured, but the EDPS recommends the right to information (transparency), the right to intervene (access, erasure, data portability, rectify), and the right to limit the processing (right not to be subject to automated decision-making and non-discrimination) as measures and safeguard to implement the principle of fairness.

In the literature, there are different fairness metrics that you can use. These range from group fairness, false positive error rate, unawareness, and counterfactual fairness. There is no industry standard yet on which metric to use, but you should assess fairness especially if your algorithm is making significant decisions about the individuals (e.g. banning access to the platform, financial implications, denial of services/opportunities, etc.). There are also efforts to test algorithms using different metrics. For example, NIST’s FRVT project tests different face recognition algorithms on fairness using different metrics.

The elephant in the room for fairness across groups (protected attributes) is that in situations a model is more accurate if it DOES discriminate protected attributes. Certain groups have in practice a lower success rate in areas because of all kinds of societal aspects rooted in culture and history. We want to get rid of that. Some of these aspects can be regarded as institutional discrimination. Others have more practical background, like for example that for language reasons we see that new immigrants statistically tend to be hindered in getting higher education. Therefore, if we want to be completely fair across groups, we need to accept that in many cases this will be balancing accuracy with discrimination. In the case that sufficient accuracy cannot be attained while staying within discrimination boundaries, there is no other option than to abandon the algorithm idea. For fraud detection cases, this could for example mean that transactions need to be selected randomly instead of by using an algorithm.

A machine learning use case may have unsolvable bias issues, that are critical to recognize before you even start. Before you do any data analysis, you need to think if any of the key data elements involved have a skewed representation of protected groups (e.g. more men than women for certain types of education). I mean, not skewed in your training data, but in the real world. If so, bias is probably impossible to avoid - unless you can correct for the protected attributes. If you don’t have those attributes (e.g. racial data) or proxies, there is no way. Then you have a dilemma between the benefit of an accurate model and a certain level of discrimination. This dilemma can be decided on before you even start, and save you a lot of trouble.

Even with a diverse team, with an equally distributed dataset, and without any historical bias, your AI may still discriminate. And there may be nothing you can do about it.
For example: take a dataset of students with two variables: study program and score on a math test. The goal is to let the model select students good at math for a special math program. Let’s say that the study program ‘computer science’ has the best scoring students. And let’s say that much more males then females are studying computer science. The result is that the model will select more males than females. Without having gender data in the dataset, this bias is impossible to counter.


3. Data Minimization and Storage Limitation

This principle requires that you should minimize the amount, granularity and storage duration of personal information in your training dataset. To make it more concrete:

  • Do not collect or copy unnecessary attributes to your dataset if this is irrelevant for your purpose
  • Anonymize the data where possible. Please note that this is not as trivial as “removing PII”. See WP 29 Guideline
  • If full anonymization is not possible, reduce the granularity of the data in your dataset if you aim to produce aggregate insights (e.g. reduce lat/long to 2 decimal points if city-level precision is enough for your purpose or remove the last octets of an ip address, round timestamps to the hour)
  • Use less data where possible (e.g. if 10k records are sufficient for an experiment, do not use 1 million)
  • Delete data as soon as possible when it is no longer useful (e.g. data from 7 years ago may not be relevant for your model)
  • Remove links in your dataset (e.g. obfuscate user id’s, device identifiers, and other linkable attributes)
  • Minimize the number of stakeholders who accesses the data on a “need to know” basis

There are also privacy-preserving techniques being developed that support data minimization:

  • distributed data analysis: exchange anonymous aggregated data
  • secure multi-party computation: store data distributed-encrypted

Further reading:

  • ICO guidance on AI and data protection
  • FPF case-law analysis on automated decision making


4. Transparency

Privacy standards such as FIPP or ISO29100 refer to maintaining privacy notices, providing a copy of user’s data upon request, giving notice when major changes in personal data processing occur, etc.

GDPR also refers to such practices but also has a specific clause related to algorithmic-decision making. GDPR’s Article 22 allows individuals specific rights under specific conditions. This includes getting a human intervention to an algorithmic decision, an ability to contest the decision, and get a meaningful information about the logic involved. For examples of “meaningful information”, see EDPS’s guideline. The US Equal Credit Opportunity Act requires detailed explanations on individual decisions by algorithms that deny credit.

Transparency is not only needed for the end-user. Your models and datasets should be understandable by internal stakeholders as well: model developers, internal audit, privacy engineers, domain experts, and more. This typically requires the following:

  • proper model documentation: model type, intent, proposed features, feature importance, potential harm, and bias
  • dataset transparency: source, lawful basis, type of data, whether it was cleaned, age. Data cards is a popular approach in the industry to achieve some of these goals. See Google Research’s paper and Meta’s research.
  • traceability: which model has made that decision about an individual and when?
  • explainability: several methods exist to make black-box models more explainable. These include LIME, SHAP, counterfactual explanations, Deep Taylor Decomposition, etc. See also this overview of machine learning interpretability and this article on the pros and cons of explainable AI.


5. Privacy Rights

Also known as “individual participation” under privacy standards, this principle allows individuals to submit requests to your organization related to their personal data. Most referred rights are:

  1. right of access/portability: provide a copy of user data, preferably in a machine-readable format. If data is properly anonymized, it may be exempted from this right.
  2. right of erasure: erase user data unless an exception applies. It is also a good practice to re-train your model without the deleted user’s data.
  3. right of correction: allow users to correct factually incorrect data. Also, see accuracy below
  4. right of object: allow users to object to the usage of their data for a specific use (e.g. model training)


6. Data accuracy

You should ensure that your data is correct as the output of an algorithmic decision with incorrect data may lead to severe consequences for the individual. For example, if the user’s phone number is incorrectly added to the system and if such number is associated with fraud, the user might be banned from a service/system in an unjust manner. You should have processes/tools in place to fix such accuracy issues as soon as possible when a proper request is made by the individual.

To satisfy the accuracy principle, you should also have tools and processes in place to ensure that the data is obtained from reliable sources, its validity and correctness claims are validated and data quality and accuracy are periodically assessed.


7. Consent

Consent may be used or required in specific circumstances. In such cases, consent must satisfy the following:

  1. obtained before collecting, using, updating, or sharing the data
  2. consent should be recorded and be auditable
  3. consent should be granular (use consent per purpose, and avoid blanket consent)
  4. consent should not be bundled with T&S
  5. consent records should be protected from tampering
  6. consent method and text should adhere to specific requirements of the jurisdiction in which consent is required (e.g. GDPR requires unambiguous, freely given, written in clear and plain language, explicit and withdrawable)
  7. Consent withdrawal should be as easy as giving consent
  8. If consent is withdrawn, then all associated data with the consent should be deleted and the model should be re-trained.

Please note that consent will not be possible in specific circumstances (e.g. you cannot collect consent from a fraudster and an employer cannot collect consent from an employee as there is a power imbalance). If you must collect consent, then ensure that it is properly obtained, recorded and proper actions are taken if it is withdrawn.


8. Model attacks

See the security section for security threats to data confidentiality, as they of course represent a privacy risk if that data is personal data. Notable: membership inference, model inversion, and training data leaking from the engineering process. In addition, models can disclose sensitive data that was unintendedly stored during training.


Scope boundaries of AI privacy

As said, many of the discussion topics on AI are about human rights, social justice, safety and only a part of it has to do with privacy. So as a data protection officer or engineer it’s important not to drag everything into your responsibilities. At the same time, organizations do need to assign those non-privacy AI responsibilities somewhere.


Before you start: Privacy restrictions on what you can do with AI

The GDPR does not restrict the applications of AI explicitly but does provide safeguards that may limit what you can do, in particular regarding Lawfulness and limitations on purposes of collection, processing, and storage - as mentioned above. For more information on lawful grounds, see article 6

In an upcoming update, more will be discussed on the US AI bill of rights.

The US Federal Trade Committee provides some good (global) guidance in communicating carefully about your AI, including not to overpromise.

The EU AI act does pose explicit application limitations, such as mass surveillance, predictive policing, and restrictions on high-risk purposes such as selecting people for jobs. In addition, there are regulations for specific domains that restrict the use of data, putting limits to some AI approaches (e.g. the medical domain).

The EU AI Act in a nutshell:

Human rights are at the core of the AI Act, so risks are analyzed from a perspective of harmfulness to people.

The Act identifies four risk levels for AI systems:

  • Unacceptable risk: will be banned. Includes: Manipulation of people, social scoring, and real-time remote biometric identification (e.g. face recognition with cameras in public space).
  • High risk: products already under safety legislation, plus eight areas (including critical infrastructure and law enforcement). These systems need to comply with a number of rules including the a security risk assessment and conformity with harmonized (adapted) AI security standards OR the essential requirements of the Cyber Resilience Act (when applicable).
  • Limited risk: has limited potential for manipulation. Should comply with minimal transparency requirements to users that would allow users to make informed decisions. After interacting with the applications, the user can then decide whether they want to continue using it.
  • Minimal/non risk: the remaining systems.

So organizations will have to know their AI initiatives and perform high-level risk analysis to determine the risk level.

AI is broadly defined here and includes wider statistical approaches and optimization algorithms.

Generative AI needs to disclose what copyrighted sources were used, and prevent illegal content. To illustrate: if OpenAI for example would violate this rule, they could face a 10 billion dollar fine.

Links:

  • Original draft AI Act
  • Amendments
  • More information

Further reading on AI privacy

  • NIST AI Risk Management Framework 1.0
  • PLOT4ai threat library
  • Algorithm audit non-profit organisation
  • For pure security aspects: see the ‘Further reading on AI security’ above in this document

Project status

This page is the current outcome of the project. The goal is to collect and present the state of the art on these topics through community collaboration. First in the form of this page, and later in other document forms. Please provide your input through pull requests / submitting issues (see repo) or emailing the project lead, and let’s make this guide better and better.

The work in this guide will serve as input to the upcoming ISO/IEC 27090 (AI security) and 27091 (AI privacy) standards, which will be done through membership of ISO/IEC JTC1/SC27/WG4, WG5, CEN/CENELEC JTC 21/WG1-TG, and the SC42 AHG4 group.

 


AI Security 2025 


Summary

  • How Hackers Are Weaponizing AI (And Why That's Good News for Your Career)
  • How Defenders Are Fighting Back (And Where You Fit In)
  • Building Your AI-Security Skill Stack
  • The Learning Path: Where to Start
  • The Bottom Line

How Hackers Are Weaponizing AI (And Why That's Good News for Your Career)

The bad guys aren't sitting around waiting for defenders to catch up. They're already using AI to make their attacks faster, smarter, and more effective. Understanding these techniques gives you career-relevant knowledge that makes you valuable to employers.

AI-Powered Social Engineering Iranian hacking groups like Charming Kitten are using AI to craft personalized phishing messages that are virtually indistinguishable from legitimate communications. They're building sophisticated systems that analyze targets' social media profiles, writing styles, and professional networks to create highly targeted attacks.

Companies need security analysts who can spot these AI-generated attacks. If you have experience with natural language processing, data analysis, or even content creation, you already have relevant skills. Security teams need people who think like attackers and understand how AI can be manipulated.

Automated Translation and Global Operations Groups like "Reconnaissance Spider" are using AI to translate their phishing campaigns into multiple languages, dramatically expanding their reach. Sometimes they even forget to remove the AI boilerplate text—a rookie mistake that security professionals learn to spot.

Multilingual security professionals are valuable in this market. If you speak multiple languages and understand cultural nuances, global security teams need these skills to detect and analyze international threat campaigns.

High-Volume Attack Operations North Korea's "Famous Chollima" hacking team uses AI-powered tools to maintain what security researchers call an "exceptionally high operational tempo"—over 320 intrusions annually. They're using AI to automate everything from resume writing for fake job applications to managing video interviews for fraud schemes.

This creates demand for threat intelligence analysts who can track these automated campaigns, security automation engineers who can build defensive systems that scale to match attack volumes, and incident response specialists who understand AI-driven threats.

AI-Powered Ransomware Negotiations Perhaps most concerning, ransomware groups are now deploying AI chatbots to handle negotiations with victims. These bots can operate 24/7, apply psychological pressure, and communicate in multiple languages simultaneously. They're essentially scaling human manipulation through artificial intelligence.

This trend is driving massive demand for digital forensics experts who can analyze AI-generated communications, negotiation specialists who understand both human psychology and AI behavior, and security architects who can design systems to prevent automated extortion.

How Defenders Are Fighting Back (And Where You Fit In)

The defensive side of AI in cybersecurity offers the most career opportunities. Organizations are investing billions in AI-powered security tools, and they need people who can build, deploy, and manage these systems.

Conversational Security Testing Platforms like Pentera are introducing "vibe red teaming"—allowing security professionals to direct penetration tests using natural language. Instead of manually configuring complex attack scenarios, you can literally tell the AI, "Check if credentials can access the finance database," and it builds and executes an attack plan.

Companies need AI security engineers who can design these conversational interfaces, prompt engineers who specialize in security contexts, and security testers who understand both traditional pen testing and AI-assisted methodologies.

API-First Intelligence Platforms Modern security platforms are being rebuilt from the ground up with AI in mind. Every attack technique becomes an individual backend function that AI can access and combine in novel ways. This architecture enables faster development and more adaptive security testing.

DevSecOps engineers who understand both AI APIs and security workflows are in high demand. If you have experience with API development, microservices architecture, or automation frameworks, you have relevant skills that many traditional security professionals are still learning.

Advanced Web Attack Surface Testing AI is revolutionizing how organizations test their web applications. Instead of relying on static vulnerability scanners, AI systems can parse vast amounts of data, understand what attackers are actually looking for (credentials, tokens, API keys), and adapt their testing approaches based on the specific system they're analyzing.

Organizations need machine learning engineers who specialize in security applications, web application security specialists who understand AI-driven testing, and data scientists who can train models to recognize security vulnerabilities.

Validating AI Systems Themselves As more organizations deploy large language models and AI assistants, these systems become high-value targets. Security teams need to test AI applications for prompt injection attacks, data leakage, and context poisoning—entirely new attack categories that didn't exist before.

Organizations need AI security specialists who understand both machine learning and traditional security principles, red team engineers who specialize in AI system attacks, and compliance professionals who understand AI-specific regulatory requirements.

Building Your AI-Security Skill Stack

If you're coming from another tech field, you likely have more relevant experience than you realize. Here's how to bridge the gap:

If You're Coming from Software Development: Your understanding of secure coding practices translates directly to AI security. Learn about prompt injection, model poisoning, and adversarial attacks. These concepts will feel familiar—they're essentially new variations on injection and tampering attacks you already understand.

If You're Coming from Data Science: You have relevant experience that most traditional security professionals are still developing. Focus on learning security-specific applications of machine learning: anomaly detection for threat hunting, behavioral analysis for insider threat detection, and model security for protecting AI systems themselves.

If You're Coming from IT Operations: Your infrastructure and automation experience is incredibly valuable. Modern AI security tools require deep integration with existing IT systems. Learn about security orchestration platforms, automated incident response, and AI-powered security information and event management (SIEM) systems.

If You're Coming from Product Management: Security teams need people who can translate technical AI concepts into business requirements. Focus on learning risk assessment frameworks, compliance requirements for AI systems, and how to communicate AI security risks to non-technical stakeholders.

The Learning Path: Where to Start

Don't try to learn everything at once. Here's a practical progression:

Foundation (Month 1-2): Start with basic cybersecurity concepts through free resources like Cybrary or SANS community courses. Focus on understanding common attack vectors and defensive strategies. You don't need to become a penetration tester overnight.

AI Security Fundamentals (Month 3-4): Learn about AI-specific vulnerabilities through platforms like OWASP's AI Security and Privacy Guide. Understand how traditional security principles apply to machine learning systems.

Hands-On Practice (Month 5-6): Set up lab environments using tools like Damn Vulnerable AI or AI Red Team exercises. Practice identifying AI-generated content, testing AI applications for security flaws, and using AI-powered security tools.

Specialization (Month 6+): Choose your focus area based on your background and interests. Whether it's threat intelligence, security engineering, or AI system security, go deep on the specific skills that align with your career goals.

The Bottom Line

AI and cybersecurity work together to create entirely new career categories. Organizations need people who can think like both attackers and defenders, who understand both AI capabilities and security principles.

If you've been considering a career pivot into cybersecurity, now is the time. The field needs fresh perspectives from people who understand AI, automation, and data analysis. Traditional cybersecurity professionals are learning AI; you get to learn security while already understanding the AI piece.

The AI arms race in cybersecurity continues to accelerate. These jobs will exist in five years—the question is whether you'll be ready to fill them. The market for these skills is strong right now, so it's a good time to start building your expertise.

 

Display real testimonials

Copyright © 2025 My Cool AI Agent - All Rights Reserved.

Powered by

This website uses cookies.

We use cookies to analyze website traffic and optimize your website experience. By accepting our use of cookies, your data will be aggregated with all other user data.

Accept