Stefan Manja / Internal AI systems for enterprise workflows
00 / overview

Internal AI systems delivery

Belgrade / international

Internal AI systems that hold up in real use

I work with enterprise and mid-market teams to build, evaluate, and harden internal AI systems for real workflows: disciplined deployment, clear system behavior, and delivery choices that hold up after launch.

Operating brief

Focus
Internal assistants, knowledge-access workflows, and analyst-support systems that need to hold up after launch.
Position
Project-based builder and advisory partner, not a broad AI-transformation pitch.
Reply window
Good-fit inquiries get a response within 3 business days.

Outcome

~75% faster

credit-risk workflow analysis

LLM-assisted workflow design focused on analyst usefulness instead of demo value.

System

Self-hosted assistant

enterprise knowledge-access build

Dockerized Flask backend, PostgreSQL + pgvector retrieval, LangChain ingestion, and Azure AD SSO.

Scale

50+ use cases

from PoCs to first scaled delivery

AI work spanning finance, operations, logistics, and adjacent teams, from opportunity assessment through PoCs to rollout.

01 / selected work

Selected work

Selected proof first: concrete delivery choices, system shape, and outcomes that mattered in real environments.

Featured

A Delta Holding workflow redesign that used LLM assistance to reduce analysis time by roughly 75% while staying grounded in an existing analyst process.

Context
Enterprise finance and credit workflow context inside a diversified holding company.
Outcome
~75% faster analysis, with 4.4 out of 5 analyst-rated quality and over 90% recommendation acceptance.
Python · LLM APIs · Workflow design · Internal data context
Featured

A self-hosted internal assistant for Delta Holding, built to support enterprise knowledge access with practical architecture choices around hosting, retrieval, and identity.

Context
Internal enterprise knowledge-access and assistant use cases across a large business environment.
Outcome
A production-minded, self-hosted internal assistant with Dockerized services, PostgreSQL + pgvector retrieval, LangChain ingestion, and Azure AD SSO.
Flask · Docker · PostgreSQL · pgvector · LangChain · Azure AD SSO
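For readers curious what the pgvector retrieval layer amounts to, here is a minimal sketch. The table and column names (`doc_chunks`, `embedding`) are illustrative placeholders, not taken from the actual build; the pure-Python function mirrors what pgvector's `<=>` cosine-distance operator computes server-side.

```python
# Illustrative SQL for a pgvector top-k similarity query.
# Table/column names here are hypothetical, not from the deployed system.
TOP_K_SQL = """
SELECT id, content
FROM doc_chunks
ORDER BY embedding <=> %(query_vec)s  -- pgvector cosine-distance operator
LIMIT %(k)s;
"""

def cosine_distance(a, b):
    """Pure-Python equivalent of pgvector's <=> operator: 1 - cosine similarity."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(y * y for y in b) ** 0.5
    return 1.0 - dot / (norm_a * norm_b)

# Toy 2-D embeddings: the nearest chunk is the one with the smallest distance.
chunks = {"intro": [1.0, 0.0], "pricing": [0.0, 1.0], "setup": [0.7, 0.7]}
query = [0.9, 0.1]
best = min(chunks, key=lambda name: cosine_distance(query, chunks[name]))
print(best)  # -> intro
```

In production the ordering runs inside PostgreSQL (typically with an IVFFlat or HNSW index on the embedding column), so the application only sends the query vector and receives the top-k chunks.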
02 / context

About Stefan

Current enterprise context, earlier delivery proof, and a bias toward systems that hold up in real use.

I currently work in an enterprise and industrial context and previously spent four years at Delta Holding, where the work moved from individual systems toward first scaled AI delivery across multiple business units. The common thread is not trend-following. It is building internal systems that are worth relying on after the demo ends.

Current context

Applied AI in an enterprise and industrial environment.

Previous proof base

Delta Holding work across internal assistants, credit workflows, and AI delivery from opportunity assessment through PoCs to first scaled rollout.

Operating bias

Evaluation, deployment discipline, and systems that remain useful after the initial excitement wears off.

03 / services

How I help

Project-based build first, with hardening and advisory work where they improve delivery.

The main fit is an internal workflow with real users, a concrete owner, and a reason the system needs to hold up after launch. The service page goes deeper on project shapes and engagement flow.

See project shapes →

Primary mode

Build

Project-based delivery of internal AI systems for enterprise and mid-market teams with a concrete workflow to improve.

  • Internal assistants, knowledge-access systems, and analyst-support workflows
  • Evaluation-gated first versions built for real users
  • Ownership from scoped workflow to shipped implementation

Secondary mode

Productionize

Take a prototype or pilot already in motion and make it more reliable, testable, observable, and ready for real use.

  • Evaluation and failure-mode hardening
  • Deployment and monitoring readiness
  • Prototype-to-production cleanup

Secondary mode

Advise

Scoped advisory work that sharpens system shape, delivery path, and implementation risk before or alongside build work.

  • Architecture and workflow review
  • Use-case, ownership, and delivery scoping
  • Implementation risk and handoff planning

04 / public proof

Working style

A public proof point for the working style behind the delivery.

The Agentic Development Playbook is not the main offer on this site. It is public evidence of how I keep AI implementation scoped, reviewable, and evaluation-gated once work becomes repo-level and real delivery starts.

Public artifact, not a private claims page: scoped tasks, verification before commit, repo files as the source of truth, and a light PoC/evaluation path when early work still needs decision-grade evidence.

View the Playbook →

05 / fit

Fit guidance

The strongest work starts with a concrete workflow, real users, and a reason the system needs to hold up after launch.

This is a better fit for scoped build, hardening, or advisory work than for open-ended AI exploration without an owner, workflow, or adoption path.

Good fit

  • Teams with a concrete internal workflow, real users, and real stakes after launch.
  • Internal assistants, knowledge-access systems, and analyst-support tools that need disciplined evaluation and delivery.
  • Project-based build, hardening, or advisory work with clear ownership and real business pull.

Not a fit

  • Pure idea-stage exploration with no real workflow owner or adoption path.
  • Generic “AI transformation” requests without a defined problem to solve.
  • Marketing-style chatbot work where visual novelty matters more than operational quality.

Project inquiry

If the workflow is concrete enough to describe, I can usually tell quickly whether a build or advisory conversation makes sense.

I take on project-based build, hardening, and advisory work for internal AI systems. The strongest first conversations are about a real workflow, real users, and the constraints that will decide whether the system actually gets adopted.