
Don’t get lost – How femtech can navigate the EU medical device and AI rules


By Xisca Borrás and Ellie Handy of the life sciences regulatory department at Bristows law firm

Femtech, short for female technology, is an important and fast-growing sector. The EU is a key market for femtech, with five of the top 10 countries for femtech investment located in the EU.

Femtech products are developed for many areas of women’s health, such as menstrual health, pregnancy planning and monitoring, menopause and mental wellbeing.

As femtech is intrinsically linked to health needs, a key question for femtech products is whether they are regulated as medical devices or merely consumer products.

Additionally, many femtech products are embracing the use of artificial intelligence (“AI”). Therefore, another key question is whether products using AI will be regulated as “high-risk” AI systems under the EU’s new AI legal framework.

This article looks at when femtech apps and software qualify as medical devices in the EU and how the medical device and AI legal frameworks interact.

What is a software medical device?

The definition of “medical device” in the EU’s Medical Device Regulation 2017/745 (the “EU MDR”) includes software, used alone or in combination, that is intended by its legal manufacturer for a medical purpose. These medical purposes are listed in the EU MDR and include (amongst others):

  • diagnosis, prevention, monitoring, prediction, prognosis, treatment or alleviation of disease;
  • diagnosis, monitoring, treatment, alleviation of, or compensation for, an injury or disability; and
  • control or support of conception.

The legal manufacturer is the person or company that puts its name or branding on the device and takes responsibility for it.

Whether software is considered a medical device will depend on whether the manufacturer states it has a medical purpose in the relevant documentation/materials.

The EU MDR defines intended purpose as “the use for which a device is intended according to the data supplied by the manufacturer on the label, in the instructions for use or in promotional or sales materials or statements and as specified by the manufacturer in the clinical evaluation”.

What is the test for qualifying as a medical device in the EU?

Several guidance documents can help determine whether a product qualifies as a medical device. We summarise some of the key guidance below:

  1. MDCG 2019-11 rev.1 

Under the EU MDR, the Medical Device Coordination Group (“MDCG”) has published guidance on the qualification and classification of software as a medical device. It sets out five decision steps to help determine if a piece of software is a medical device in the EU. The steps are:

  • Step 1: Is the product software?
  • Step 2: Is it standalone software (i.e., it is not an accessory nor driving/influencing the use of a hardware device) and does it not fall within Annex XVI?
  • Step 3: Is it performing an action on data beyond storage, archival, communication, simple search or lossless compression?
  • Step 4: Does it act for the benefit of an individual patient?
  • Step 5: Does it have a medical purpose (as set out in the medical device definition)?

If the answer to all five questions is yes, it will qualify as a medical device. In this case, manufacturers will have to ensure they comply with the pre-market requirements set out in the EU MDR before they can place the software medical device on the market.
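The five-step test above can be sketched as a simple decision function. This Python sketch is purely illustrative: the parameter names and boolean framing are ours, and the real assessment is a case-by-case legal judgement, not a mechanical check.

```python
def qualifies_as_medical_device(
    is_software: bool,
    is_standalone: bool,                # Step 2: not an accessory, not driving hardware, not Annex XVI
    processes_data: bool,               # Step 3: beyond storage, archival, communication, search, compression
    benefits_individual_patient: bool,  # Step 4: acts for the benefit of an individual patient
    has_medical_purpose: bool,          # Step 5: per the EU MDR "medical device" definition
) -> bool:
    """Illustrative sketch of the MDCG 2019-11 five-step decision test.

    Note: in the actual guidance a "no" at some steps does not always mean
    "not regulated" (e.g. software driving a hardware device is assessed
    together with that device); this sketch captures only the path by which
    software qualifies as a standalone medical device.
    """
    return all([
        is_software,
        is_standalone,
        processes_data,
        benefits_individual_patient,
        has_medical_purpose,
    ])

# A period app promoted only for cycle logging fails Step 5:
print(qualifies_as_medical_device(True, True, True, True, False))  # False
# The same app promoted for contraception has a medical purpose:
print(qualifies_as_medical_device(True, True, True, True, True))   # True
```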

Notably, they will need to set up a quality management system, compile a technical file, undergo the appropriate conformity assessment and affix a CE mark.

Importantly, the manufacturers would also need to consider post-market requirements, such as having a post-market surveillance system and undertaking post-market vigilance.

2. Other relevant guidance

The MDCG has also published a manual on borderline and classification of medical devices under the EU MDR.

Additional guidance may also be available from national competent authorities. The legal manufacturer could also look at how comparable products already on the market are regulated (e.g. via EUDAMED). However, we would caution against relying too heavily on the regulation of other products, as there is no guarantee they are compliant.

What if you’re not a medical device?

If the software does not qualify as a medical device, the product will not have to comply with the EU MDR.

However, the manufacturer should be careful about how it promotes its product and the claims it makes about it because, as discussed above, a medical device is defined based on the manufacturer’s intended purpose.

Take the example of a simple period app. Using it to log period dates, track ovulation and predict future cycles serves no medical purpose, so the app is not a medical device.

However, if its manufacturer promotes the software for contraception and/or to support conception, it acquires a medical purpose and would qualify as a medical device.

As such, the manufacturer would either have to bring the device into conformity with the EU MDR or take action to change the promotional materials to remove the medical claims.

Interaction between medical devices and AI legal frameworks 

Under the EU MDR, devices are assigned risk classifications. For the lowest risk devices (Class I medical devices), the manufacturer can self-certify compliance with the EU MDR prior to the product being placed on the market or put into service in the EU.

However, higher-risk devices (Class IIa and above) must undergo a third-party conformity assessment carried out by a notified body.

Notified body conformity assessments require a detailed review of the manufacturer’s quality management system, technical documentation, systems and procedures.

The process will often take more than a year to complete. Additionally, manufacturers have to grapple with ongoing burdens such as vigilance and post-market surveillance.

Under the EU MDR, most software as a medical device will be classified as Class IIa or above.

Like the EU MDR, the EU’s Regulation (EU) 2024/1689 (the “AI Act”) also distinguishes between AI systems that pose different levels of risk.

The AI Act imposes onerous obligations on “high-risk” AI systems, including in relation to accuracy, transparency, risk management, data quality and governance, and human oversight.

Although there is some overlap between the EU MDR and AI Act requirements, many are new AI-specific obligations. These pose a significant additional regulatory burden, increasing the complexity and cost of compliance for stakeholders.

Notably, the risk classification of an AI system that is itself, or is included in, a medical device is linked to the device’s classification under the EU MDR. Under the AI Act, AI systems are classified as “high risk” systems if:

(a) the AI system is a safety component of a medical device or the AI system itself is a medical device; and 
(b) the medical device is required to undergo a third-party conformity assessment under the EU MDR.

Therefore, low-risk medical devices (i.e., Class I medical devices) that are self-certified cannot be “high-risk” AI systems.

Conversely, any device whose conformity assessment must be performed by a notified body will be a “high-risk” AI system, and so will be subject to the additional AI Act requirements.

Unfortunately for those wishing to avoid the “high risk” AI system requirements, there are relatively few Class I devices under the EU MDR.

Therefore, the majority of medical devices that are an AI system or have an AI system as a safety component will qualify as “high-risk” AI systems.

One notable example of a Class I device is software intended to support conception by calculating the user’s fertility status based on a validated statistical algorithm.

If this kind of software medical device is also an AI system, it would not be classed as a “high risk” AI system, so it would not be subject to the more onerous requirements in the AI Act.

However, the manufacturers of these devices would need to carefully consider any product developments that add additional functionality, as this can impact the risk classification of the product under both the EU MDR and AI Act.

For example, if the manufacturer added functionality to the Class I device so it could also be used as a means of contraception, it would become a Class IIb medical device and would need a third-party conformity assessment.

In turn, as the software is also an AI system, this would mean the AI system would be considered “high-risk” and be subject to additional regulatory requirements under the AI Act.
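The classification logic in the example above can be sketched as follows. This is a simplified illustration: the class labels are strings of our choosing, and it ignores nuances such as sterile or measuring Class I devices (Class Is/Im), which do involve a notified body for limited aspects.

```python
def needs_notified_body(mdr_class: str) -> bool:
    """Simplified EU MDR rule: Class I devices self-certify;
    Class IIa and above require a notified body conformity assessment."""
    return mdr_class.upper() != "I"


def is_high_risk_ai_system(is_device_or_safety_component: bool, mdr_class: str) -> bool:
    """Sketch of the AI Act link: an AI system is "high risk" if it is
    (or is a safety component of) a medical device AND that device needs
    a third-party conformity assessment under the EU MDR."""
    return is_device_or_safety_component and needs_notified_body(mdr_class)


# Fertility-calculation software (Class I): not a high-risk AI system
print(is_high_risk_ai_system(True, "I"))    # False
# The same software repositioned for contraception (Class IIb): high risk
print(is_high_risk_ai_system(True, "IIb"))  # True
```

The point of the sketch is that the AI Act adds no independent risk test here: changing the device's EU MDR class flips the AI Act outcome automatically.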

Whilst AI has the potential to provide tremendous benefits for femtech, it also triggers additional complexity that can be time-consuming and costly to navigate.

It is important to get it right in terms of compliance in order to maintain consumer trust, avoid regulatory penalties, and pave the way for long-term success and viability.

By Xisca Borrás, Partner – Life sciences regulatory and  Ellie Handy, Senior Associate – Life sciences regulatory at Bristows law firm.


What Maternal Mental Health Month reveals about where postpartum support actually breaks down


By Morgan Rose, chief science officer at Ema, and Lauren Scocozza, vice president of product at Willow

May is Maternal Mental Health Month, and every year it surfaces a familiar set of statistics: 1 in 5 new mothers experiences postpartum depression or anxiety, most go unscreened, and the majority who are screened don’t receive adequate follow-up care.

The conversation is important. But the numbers obscure something that anyone who has worked in this space knows to be true: postpartum mental health distress rarely arrives with a label.

It arrives as exhaustion. As “I’m not sure I’m doing this right.”

As a question about supply, pumping, whether it’s okay to feel this disconnected from something you were supposed to love immediately.

Willow integrated Ema, an AI built for women’s health, with the goal of closing the maternal care and data gap.

The pattern mentioned above appears consistently in Ema’s conversational data through the Willow app.

A mother reports mastitis symptoms.

Ema walks her through the clinical presentation and confirms she should keep pumping; the mother then asks whether she is using her pump correctly. In the same thread, within a few exchanges, she says she’s “feeling too sad.” Then: “I don’t know. I think I’m depressed. I am not enjoying my postpartum.”

She did not come to the app to talk about her mental health.

She came about a breast infection. The mental health disclosure came through the already-opened door.

The Weight Underneath the Technical Question

New motherhood involves an enormous amount of problem-solving at a time when cognitive and emotional reserves are depleted. The pump has to work. The baby has to eat. The body has to recover.

Work comes back. Sleep doesn’t. Feeding a baby requires skill, and the learning curve sits atop it all.

What Ema’s conversation data shows is that the emotional load of navigating these challenges is not separate from mental health. It is mental health.

When a mother writes, “I’m postpartum and overwhelmed and tired,” and then, in the same breath, asks about flange sizing, she is telling us what the postpartum experience actually feels like from the inside.

The technical question and the emotional state are one and the same.

Breastfeeding carries particular weight here.

The desire to breastfeed, the guilt when it doesn’t go as planned, and the identity questions that come with feeding choices are not peripheral to the postpartum mental health conversation.

In our conversations, women navigating supply concerns often reveal deeper anxieties: about whether they are good mothers, whether their bodies are “working,” and whether the difficulty they are experiencing means something about them.

These are the signals worth asking about.

What Screening Looks Like in Practice

Ema is trained on the Edinburgh Postnatal Depression Scale and is equipped to offer the EPDS when a conversation warrants it.

The value is being present for the moment when a woman is ready to name what she’s feeling.

That moment rarely comes as a direct request for mental health support. It comes when someone is already in a conversation about something else, and something shifts.

A woman dealing with mastitis says she feels sad. A woman worried about supply says she doesn’t feel like herself. A woman managing the logistics of going back to work with a wearable pump says she’s not sure she can keep up with it all — and the “it all” isn’t about the pump.

Ema is designed to hear that. She doesn’t stay on the clinical or technical track when the conversation moves. She follows the person.

And when the moment is right, she offers the screening as a natural next step.

In one exchange, a woman was offered the EPDS after disclosing depressive feelings. She declined.

Ema acknowledged that and asked if she wanted to talk about something else. That’s the right response. The offer was made without pressure. The door stays open.

Sometimes what matters most is that someone asked at all.

The Continuity Problem

One of the most persistent structural failures in maternal mental health care is fragmentation.

A woman sees her OB at six weeks postpartum for a brief screening. She may get a call from a nurse. She may be given a referral she never follows up on because she doesn’t have the capacity to navigate a new care relationship while managing a newborn.

The clinical touchpoints are too few, too far apart, and too often siloed from one another.

The postpartum period lasts far longer than the six-week checkup implies. Mental health symptoms can emerge weeks or months after delivery, shift in character over time, and interact with physical challenges in ways that don’t fit neatly into any single provider’s lane.

A lactation concern becomes an anxiety spiral. A supply drop triggers a grief response. A difficult return to work surfaces a postpartum depression that wasn’t fully recognized at six weeks.

Ema sits inside these moments because she’s embedded in the platform women are already using. She doesn’t require a separate appointment, a referral, or the cognitive bandwidth to seek out a new resource.

She’s in the Willow app that a mother is already using multiple times a day to manage her pump.

When Ema identifies a woman who may need more support than she can provide, she routes to the right resource — whether that’s a SimpliFed lactation consultant for feeding-related concerns or a clinical professional for mental health follow-up.

The conversation leads to the handoff with someone who can do more.

What the Month of May Means for the Rest of the Year

Maternal Mental Health Month is a useful moment of attention. The awareness campaigns, the social media posts, and the statistics shared in newsletters matter.

But the gap in postpartum mental health care is not really an awareness problem.

Most people in the perinatal space and beyond know the statistics. The problem is access, timing, and continuity.

AI doesn’t close that gap on its own.

What it can do is be present in the spaces where women already are, at the times when they need something, and attentive enough to recognise that a conversation about a pump, a clogged duct, or a supply concern is also a conversation about how someone is doing.

The question behind the question is often the more important one.

For Willow, the conversation data Ema generates is a map of where mothers are struggling, what they reach for when they need help, and when they are ready to say more than they came to say.

That information, used well, shapes better resources, better onboarding, and a more connected experience across the full arc of the postpartum year and beyond.

Building the infrastructure to support maternal mental health is a year-round project.

Willow is doing one part of that, and the conversations happening on the Willow platform every day are evidence that women want support that meets them where they are: in their app, in their moment, without having to ask for it twice.

About the authors

Morgan Rose is Chief Science Officer at Ema, an AI platform for women’s health. Ema partners with healthcare organisations and femtech companies to deliver clinically grounded AI support across the perinatal journey.

Lauren Scocozza is the Vice President of Product at Willow Innovations, Inc. For women, by women, Willow is building a maternal care platform to address the interconnected challenges of the postpartum period.


Online abuse and deepfakes ‘pushing women out of public life’


Deepfakes, AI-assisted rape and unwanted advances are pushing women out of public life, a report has found.

Online violence against women in public life is becoming increasingly technologically sophisticated, with perpetrators able to use AI tools to fabricate intimate images of their targets.

Survey responses suggest these attacks are often deliberate and coordinated, aiming to silence women in public life while undermining their professional credibility and personal reputations.

The report, “Tipping point: Online violence impacts, manifestations and redress in the AI age”, was published by UN Women and produced in partnership with City St George’s, University of London, and TheNerve, a digital forensics lab founded by Nobel laureate Maria Ressa.

It analysed the experiences of 641 women journalists and media workers, activists and human rights defenders from 119 countries. The women were surveyed between 27 August and 13 November 2025.

Researchers found that 27 per cent of women respondents were targeted with unsolicited sexual advances via direct message, receiving unwanted intimate images, “cyberflashing”, sexual innuendos or non-consensual sexting.

A further 12 per cent had their personal images, including those of an intimate nature, shared without their consent, while 6 per cent had been subjected to deepfakes or manipulated images and videos.

The impacts included an alarming rate of mental health diagnoses and self-censorship. Nearly one-quarter, or 24 per cent, of respondents had experienced anxiety and/or depression linked to online violence, while 13 per cent reported being diagnosed with post-traumatic stress disorder, or PTSD.

The findings also pointed to widespread self-censorship, with 41 per cent of respondents saying they self-censored on social media to avoid being abused, and 19 per cent doing so at work.

The study found that while 25 per cent of respondents had reported incidents of online violence to the police and 15 per cent had taken legal action, justice still eluded them. Some 24 per cent of the women who had reported online violence felt victim-blamed by the police, having been asked questions such as “What did you do to provoke the violence?” The same proportion said the police made them feel responsible for shielding themselves from further violence.

Julie Posetti, professor of journalism and chair of the Centre for Journalism and Democracy at City St George’s, is the project’s principal researcher and the report’s lead author.

She said: “AI-assisted ‘virtual rape’ is now at the fingertips of perpetrators. This phenomenon accelerates the harm from online violence inflicted on women in public life.”

“This violence serves to fuel the reversal of women’s hard-won rights in a climate of rising authoritarianism, democratic backsliding and networked misogyny.”

“The rollback of women’s rights is enabled and exacerbated by technologies which, by design, amplify misogynistic hate speech for profit.”

Co-author Lea Hellmueller, associate professor of journalism and associate dean for research and innovation at City St George’s, added: “The chilling effect of online violence is pushing women out of public life.”

“Law enforcement is outsourcing the responsibility for protection to the survivors by telling women to remove themselves from social media, to avoid speaking publicly about controversial issues, to move into less visible roles at work, or to take leave from their respective careers.”

“This shows that avoidance techniques, self-censorship or quitting, are still significantly more likely to be used by women rather than resistance techniques such as reporting online attacks to the police.”

Pauline Renaud, lecturer in journalism at City St George’s and fellow co-author of the study, said: “Going to the police or taking legal action do not necessarily lead to justice for survivors.”

“We need more effective education and training of law enforcement and judicial actors to support action in cases of technology-facilitated violence against women and girls.”

“This needs to be matched by political will to effectively regulate Big Tech companies that use their outsized financial and political power to undermine progress in this area.”


GLP-1 drugs do not increase pregnancy risks, study finds


GLP-1 drugs taken before conception were not linked to higher pregnancy risks in new research, which suggested they may even offer some protection.

Women of reproductive age are increasingly prescribed GLP-1 drugs for weight-management support, but the risks and benefits of using them before pregnancy remain poorly understood.

The findings support continuing the use of GLP-1 medicines in women with metabolic risk factors who are considering pregnancy, said Cara Dolin, a maternal-fetal medicine specialist and co-author of the research, which was presented at the Society of Maternal-Fetal Medicine pregnancy meeting in February 2026.

“While there’s more research to be done, this data provides some reassurance that it is not harmful to be taking a GLP-1 if you’re planning a pregnancy, and that having done so may in fact benefit you by optimising your preconception metabolic health.”

The researchers examined electronic medical records for patients with a pre-pregnancy BMI of more than 30 who delivered at more than 20 weeks’ gestation. The data were reviewed for two studies: one assessed the link between pre-pregnancy GLP-1 use and the risk of gestational diabetes, while the second looked at the risk of severe maternal morbidity in patients with obesity.

Women with obesity, diabetes, cardiovascular disease and other cardiometabolic disorders have a higher risk of pregnancy complications including preeclampsia, gestational diabetes, stillbirth, caesarean section and other outcomes. While GLP-1 medicines can help manage these conditions, they are contraindicated during pregnancy, and women are typically advised to stop the medication two months before trying to conceive.

However, stopping the drugs can often lead to rebound weight gain or worsening metabolic health. A 2025 study suggested this rebound worsened some pregnancy outcomes, but the risks and benefits are still poorly understood, Dolin said.

“There is a lot we just don’t know, which is why we wanted to look at our experience here with our Cleveland Clinic patients and see whether taking GLP-1 drugs before pregnancy was causing harm or if it was beneficial and helping patients have healthier pregnancies.”

Researchers analysed data for more than 8,000 women who had obesity but did not have diabetes before they became pregnant. They compared outcomes for 208 women who had been prescribed GLP-1 receptor agonists before pregnancy with those who had not been prescribed the medication.

Women in the GLP-1 group had more risk factors heading into pregnancy. They tended to be older and have a higher body mass index, higher rates of bariatric surgery and chronic high blood pressure, and present earlier for prenatal care.

However, outcomes for the two groups were similar. Researchers found that the GLP-1 group did not have higher rates of gestational diabetes, severe maternal morbidity or other adverse maternal outcomes, suggesting that the medication may have helped mitigate elevated risk factors.

“I think this is a really important signal, and it may reflect that these patients were able to optimise their metabolic health prior to conception.”

“It shows there’s potential to use these drugs in a more targeted way with patients who are planning a pregnancy and have these different comorbidities and obesity.”

While the findings suggest that using GLP-1 drugs before pregnancy may be beneficial in women with metabolic risk factors, having a plan to stop the medicines before conception is essential, Dolin noted. In some cases, patients may be moved to an alternative medication that is safe for pregnancy and can be used to help manage their metabolic health during pregnancy.

Providers with patients who are taking GLP-1 medicines and planning a pregnancy should consider referral to a maternal-fetal medicine specialist for pre-pregnancy counselling.

“We can have a nuanced conversation with the patient about taking the medication, what the benefits are, what the potential risks are, and help them formulate a plan to transition off the medication once they’re ready to start trying to conceive,” she said.


Copyright © 2025 Aspect Health Media Ltd. All Rights Reserved.