Open infrastructure for trustworthy AI in safety science.

in4r helps toxicology, chemical-safety, biotech/R&D, and regulatory-evidence teams turn existing tools and data sources into private, auditable AI workflows. Every source, tool call, assumption, and human review gate can be captured.

Private deployment · MCP-native · Human review gates · Evidence bundles
Pilot route · Human-reviewed
01 Assess
02 Pilot
03 Build
04 Govern

Scientific track record, public infrastructure, and a practical bounded-pilot model.

Built by Ivo Djidrovski, PhD

Creator/maintainer of public open-source tooling, including ToxMCP modules and O-QT-related workflows

Public proof

Open infrastructure across ToxMCP, O-QT, QSAR, PBPK, exposure, ADMET, and AOP workflows

Pilot-ready stance

Opening pilot and design-partner conversations for review-heavy safety-science workflows

Consulting and infrastructure for scientific teams adopting AI carefully.

Start with one workflow, one accountable team, and one useful artifact. Then decide what should become reusable infrastructure.

Start with one bounded workflow

Turn one real case study into an inspectable AI workflow.

Bring a workflow where speed matters, but traceability, privacy, and expert review cannot be optional.

QSAR / read-across · Literature-to-evidence synthesis · Chemical safety evidence · Exposure-to-PBPK handoff · Internal SOP or tool integration

Example: connect OECD QSAR Toolbox outputs, profiler alerts, analogue rationale, literature context, and expert review into one evidence bundle with provenance and audit logs.
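The bundle described above can be sketched as a minimal data structure. This is a hedged illustration only: the names (`EvidenceBundle`, `EvidenceItem`) and fields are hypothetical, not in4r's actual schema, and the content strings are placeholders.

```python
# Illustrative sketch of an evidence bundle with provenance hashes and
# an audit log. Names and fields are hypothetical, not a real schema.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import hashlib
import json


@dataclass
class EvidenceItem:
    source: str          # e.g. a profiler alert or analogue rationale
    content: str         # raw output or rationale text
    sha256: str = ""     # content hash, filled in below for provenance

    def __post_init__(self):
        # Hash the content so reviewers can later verify it is unchanged.
        self.sha256 = hashlib.sha256(self.content.encode()).hexdigest()


@dataclass
class EvidenceBundle:
    case_id: str
    items: list = field(default_factory=list)
    audit_log: list = field(default_factory=list)

    def add(self, item: EvidenceItem, actor: str) -> None:
        self.items.append(item)
        self._log(actor, f"added evidence from {item.source}")

    def review(self, actor: str, decision: str) -> None:
        # Human review gate: every decision lands in the audit log.
        self._log(actor, f"review decision: {decision}")

    def _log(self, actor: str, event: str) -> None:
        self.audit_log.append({
            "actor": actor,
            "event": event,
            "at": datetime.now(timezone.utc).isoformat(),
        })

    def export(self) -> str:
        # One inspectable artifact: items, hashes, and the full audit trail.
        return json.dumps(asdict(self), indent=2)


bundle = EvidenceBundle(case_id="pilot-001")
bundle.add(EvidenceItem("profiler alert", "DNA binding alert"), actor="analyst")
bundle.review(actor="expert-reviewer", decision="accepted with caveats")
```

The point of the sketch is the shape, not the code: every item carries a content hash, and every action (adding evidence, a review decision) appends a timestamped entry, so the exported bundle is auditable end to end.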

Infrastructure you can inspect, reuse, and build on.

in4r is not starting from a blank slide deck. The public ToxMCP suite shows the practical direction: modular MCP servers for safety-science tools, evidence sources, and reviewable workflows.

View the infrastructure page

ToxMCP Suite

PUBLIC REPOS

A public module map for chemical safety, toxicology, QSAR, exposure, PBPK, ADMET, and mechanistic reasoning workflows.

Suite hub

Paid pilots and consulting help turn this infrastructure into stable, documented, reusable open-source tools.

O-QT assistant

Start with one case-study pilot or advisory retainer.

Bring one safety-science or research case study. We will define the pilot boundary, consulting scope, evidence sources, review gates, and deployment path before scaling anything. We are opening pilot and design-partner conversations for teams operationalizing AI in review-heavy scientific workflows.

01 Case study
02 Consulting scope
03 Build path
What you get from a pilot
Pilot protocol
Working prototype
Evidence bundle: PDF, tables, figures
Audit-log example
Uncertainty + failure-mode readout
Build/governance roadmap
Book a pilot scoping call

Tell us what case study, workshop, or workflow you want to improve.

Private inbox

Please do not submit confidential data through this form. We use your details only to respond to your inquiry. Workflow details are treated confidentially. See the privacy note.