Database Performance & Scalability by khajavi.tech

Every feature, report and dashboard in your product quietly depends on one thing: how fast and reliable your data layer is. When the database is healthy, people barely think about it. When it is not, everyone feels it: timeouts, blocked checkouts, frozen dashboards and late-night firefighting. At khajavi.tech, led by Mehdi Khajavi, we treat Database Performance & Scalability as a core part of your product and growth engine – not an afterthought.

We work with teams that have grown beyond the “it works on my machine” phase. Traffic has increased, data has accumulated, new features have been added on top of old ones and somewhere in the middle of it all your database is struggling. Our role is to untangle this situation, make performance predictable again and design a path that lets you scale without rewriting everything from scratch.

Why Database Performance & Scalability matter so much

Database issues rarely show up as “database issues” in your users’ minds. They show up as slow pages, broken exports, inconsistent reports and support tickets. Slowness in the data layer impacts:

  • Customer experience – slow checkouts, timeouts in dashboards, laggy internal tools.
  • Engineering speed – developers afraid to touch parts of the schema or certain queries.
  • Data & BI – ETL jobs running late, reports out of sync, analytics that teams stop trusting.
  • Growth – marketing campaigns that must be limited because the platform “cannot handle it.”
  • Costs – scaling hardware blindly instead of fixing real bottlenecks.

The challenge is that database performance is rarely caused by one dramatic mistake. It comes from years of small tradeoffs: quick fixes, missing indexes, unbounded tables, features built without understanding how queries behave under load. Our work is to respectfully dig into that history, understand what is really going on and guide you from reactive firefighting to a more stable, deliberate approach.

Typical symptoms that it is time to act

If you recognise yourself in any of the following, you likely have a database performance and scalability problem – or will have one soon:

  • Pages that sometimes load fast and sometimes hang for several seconds without any obvious reason.
  • Spikes in CPU or I/O whenever certain reports, exports or background jobs run.
  • Deadlocks or lock timeouts appearing sporadically and disappearing when you try to debug.
  • Developers avoiding certain queries or endpoints because “they are fragile” or “they always time out.”
  • Deploys that must be done in off-hours because schema changes are risky and painful.
  • Unclear separation between OLTP (live traffic) and analytical workloads, with BI competing for the same resources.
  • Cloud database bills that feel high even though performance is still not where it should be.

None of these problems are unique to you. They are side effects of success: more users, more data, more features. The question is how you respond. Throwing more hardware at the problem works only up to a point. At khajavi.tech, we help you go deeper, with a structured approach that respects your current stack, team and roadmap.

How khajavi.tech approaches Database Performance

We do not start with a generic checklist. We start with how your system actually behaves in the real world. Our work combines engineering, data and product thinking:

  • Observation – what is slow, when, for whom and under what conditions?
  • Diagnosis – which queries, workloads and patterns are responsible and why?
  • Design – what changes will give you the biggest impact with the least risk?
  • Implementation – how do we roll out improvements safely and measure the result?
  • Long-term direction – what architectural changes are needed to scale comfortably over the coming years?

Depending on your stack, this can involve anything from deep SQL tuning and index design to caching strategies, schema refactors, partitioning or introducing specialised data stores for particular workloads. We are particularly familiar with SQL Server, modern .NET stacks and environments where application data also powers BI and analytics.

1. Schema & data modelling

A good schema reflects how your business and product work. A strained schema accumulates workarounds and tables that try to do too many things at once. We look at:

  • Whether tables represent clear entities and relationships or have grown into catch-alls.
  • Where normalisation reduces duplication – and where pragmatic denormalisation helps.
  • Column types and sizes that affect storage, memory usage and index effectiveness.
  • Patterns like polymorphic associations or generic “data bags” that complicate queries and indexing.

The goal is not to redesign everything; it is to find targeted schema corrections that unlock outsized gains.
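
To make this concrete, here is a simplified T-SQL sketch (all table and column names are illustrative, not taken from any specific system): a generic "data bag" replaced by explicit, indexable columns, with a JSON column kept only for the long tail of rarely queried attributes.

```sql
-- Illustrative only: names and types are assumptions, not a client schema.
-- A catch-all table like this forces every query through string parsing
-- and defeats indexing:
--
--   CREATE TABLE dbo.EntityData (
--       EntityId INT,
--       Kind     NVARCHAR(50),   -- 'order', 'invoice', 'note', ...
--       Payload  NVARCHAR(MAX)   -- JSON blob holding everything
--   );
--
-- Promoting the columns that hot queries actually filter on makes them
-- explicit and indexable:
CREATE TABLE dbo.Orders
(
    OrderId    INT IDENTITY   NOT NULL PRIMARY KEY,
    CustomerId INT            NOT NULL,
    Status     TINYINT        NOT NULL,  -- small, fixed domain
    PlacedAt   DATETIME2(3)   NOT NULL,
    Total      DECIMAL(12, 2) NOT NULL,
    Extras     NVARCHAR(MAX)  NULL       -- optional JSON for the long tail
);
```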

2. Indexing strategy

Indexes are powerful, but not free. Systems often suffer from missing critical indexes or from too many overlapping ones. We review:

  • Which queries actually run under load and what access patterns they imply.
  • Where composite, covering or filtered indexes can help, especially for common filters.
  • How index maintenance and fragmentation are handled in busy transactional databases.
  • Which redundant or rarely used indexes can be retired to speed up writes.

We focus on indexes that support real workloads and align with where your data is heading.
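
As a concrete illustration (continuing the hypothetical dbo.Orders table sketched above), this is the kind of targeted index work we mean: a composite index matched to a hot filter, and a filtered index for a small, frequently queried slice.

```sql
-- Illustrative sketch, not a prescription for any specific system.
-- A composite index matching a hot filter, covering the selected columns
-- so the query never needs to touch the base table:
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId_PlacedAt
    ON dbo.Orders (CustomerId, PlacedAt DESC)
    INCLUDE (Status, Total);

-- A filtered index for queries that only ever look at a small slice
-- (e.g. open orders), keeping it small and cheap to maintain on writes:
CREATE NONCLUSTERED INDEX IX_Orders_Open
    ON dbo.Orders (PlacedAt)
    WHERE Status = 1;
```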

3. Query & ORM behaviour

Even a good schema can be undermined by inefficient queries. ORMs help teams move fast, but they can also hide problems. We look for:

  • Queries that fetch far more data than they need, especially on hot paths.
  • N+1 patterns where loops in code repeatedly hit the database.
  • Unbounded queries without sensible limits or pagination.
  • Misuse of ORM features such as lazy loading in critical flows.

We are comfortable in both raw SQL and ORM optimisation — using each where it makes most sense.
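
From the database's point of view, an N+1 pattern and its set-based fix look roughly like this (a sketch, with hypothetical tables):

```sql
-- What an ORM-generated N+1 looks like on the wire: one query for the
-- parents, then one round trip per row inside an application loop:
--
--   SELECT OrderId, CustomerId FROM dbo.Orders WHERE PlacedAt >= @since;
--   SELECT * FROM dbo.OrderLines WHERE OrderId = @id;  -- executed N times
--
-- The set-based equivalent: one round trip, only the needed columns.
DECLARE @since DATETIME2(3) = DATEADD(DAY, -7, SYSUTCDATETIME());

SELECT o.OrderId, o.CustomerId, l.ProductId, l.Quantity
FROM   dbo.Orders     AS o
JOIN   dbo.OrderLines AS l ON l.OrderId = o.OrderId
WHERE  o.PlacedAt >= @since;
```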

4. Transactions, locks & concurrency

Teams often encounter locking and deadlocks as traffic grows. Long-running transactions touching hot tables can stall everything. We examine:

  • How transactions are scoped in code and what they actually protect.
  • Whether isolation levels are appropriate or overly strict.
  • “Read-then-write” patterns that cause conflicts under concurrency.
  • Batch jobs or maintenance tasks that hold locks at bad times of day.

Often, small changes to how and when work happens dramatically reduce contention without changing logic.
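
A classic example of such a change is collapsing a "read-then-write" pair into one atomic statement (a hedged sketch, schema assumed):

```sql
-- Instead of SELECT-ing a balance and UPDATE-ing it in two statements,
-- which invites lost updates and deadlocks under concurrency, move the
-- guard into the write so check and update happen atomically:
DECLARE @accountId INT = 42, @amount DECIMAL(12, 2) = 100.00;

UPDATE dbo.Accounts
SET    Balance = Balance - @amount
WHERE  AccountId = @accountId
  AND  Balance  >= @amount;        -- the read-then-write check, made atomic

IF @@ROWCOUNT = 0
    THROW 50001, N'Insufficient funds or unknown account.', 1;
```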

5. Infrastructure & configuration

Hardware and configuration choices matter once you have tuned schema and queries. We look at:

  • Resource allocation – CPU, memory, I/O, storage type and configuration.
  • Connection pooling and limits from the application side.
  • Database configuration options that affect performance characteristics.
  • Backups, maintenance tasks and their impact on normal operations.

We do not treat infrastructure as a magic fix, but we do respect its role in a stable, fast system.
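
A typical first diagnostic before any hardware decision, using SQL Server's built-in DMVs (no schema assumptions): the top wait types since the last restart, which show whether the instance is actually bound by CPU, I/O or locking.

```sql
-- Top waits, excluding a few benign idle waits. Interpreting these in
-- context is the real work, but they anchor the conversation in data.
SELECT TOP (10)
       wait_type,
       wait_time_ms / 1000.0 AS wait_seconds,
       waiting_tasks_count
FROM   sys.dm_os_wait_stats
WHERE  wait_type NOT IN (N'LAZYWRITER_SLEEP', N'SLEEP_TASK',
                         N'XE_TIMER_EVENT', N'BROKER_TASK_STOP',
                         N'SQLTRACE_INCREMENTAL_FLUSH_SLEEP')
ORDER BY wait_time_ms DESC;
```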

Designing for scalability, not just speed

A single fast query is nice; a system that stays fast as data and traffic grow is better. Scalability is about behaviour at the edges: peaks, seasonal spikes, long-term growth and unexpected surges.

Scale up vs. scale out

Scaling “up” (bigger machines) is simple to think about and sometimes the right short-term move. Scaling “out” (distribution, replicas, partitions) requires more design but gives better long-term flexibility. We help you:

  • Understand where you are in the lifecycle and what options are realistic now.
  • Decide when scaling up is still correct and when structural changes are needed.
  • Plan transitions incrementally, rather than as one risky jump from monolith to “fully distributed.”

Caching strategies

Caching can be an incredible amplifier – or a subtle source of bugs. We design caching with intention:

  • Which data can be safely cached and for how long.
  • Where to cache: in-app, shared cache (e.g. Redis), or specialised layers.
  • How to invalidate or refresh caches predictably.
  • How to avoid caches hiding serious underlying query issues.
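
Caching usually lives in the application or a shared store, but one database-side flavour is worth sketching (names hypothetical): expensive aggregates written to a small cache table with an explicit expiry, so refresh is deliberate rather than accidental.

```sql
-- A minimal cache-aside sketch at the data layer. Readers take the
-- cached row only while it is fresh; a miss falls through to the real
-- query, which rewrites the row with a new expiry.
CREATE TABLE dbo.ReportCache
(
    CacheKey  NVARCHAR(200) NOT NULL PRIMARY KEY,
    Payload   NVARCHAR(MAX) NOT NULL,  -- serialised result
    ExpiresAt DATETIME2(0)  NOT NULL
);

SELECT Payload
FROM   dbo.ReportCache
WHERE  CacheKey  = N'sales-by-region'
  AND  ExpiresAt > SYSUTCDATETIME();
```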

Read replicas & offloading work

Offloading read traffic to replicas can relieve pressure on the primary database, especially for reporting and dashboards. We help you:

  • Identify which workloads can safely tolerate replication lag.
  • Route read traffic thoughtfully without breaking data expectations.
  • Move BI, exports and background jobs to replicas instead of the primary where appropriate.
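
On SQL Server Availability Groups, for example, readable secondaries expose their lag directly, and read-only workloads can be routed to them with ApplicationIntent=ReadOnly in the connection string. A quick check before trusting a replica with dashboards (built-in DMV, SQL Server 2016+):

```sql
-- How far behind is this secondary? Run on the secondary replica.
SELECT DB_NAME(drs.database_id)        AS database_name,
       drs.synchronization_state_desc,
       drs.secondary_lag_seconds,
       drs.redo_queue_size             AS redo_queue_kb
FROM   sys.dm_hadr_database_replica_states AS drs
WHERE  drs.is_local = 1;
```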

Partitioning & sharding

Partitioning and sharding are heavy tools that should be used deliberately. We work with you to:

  • Decide whether partitioning by time, tenant, region or other keys makes sense.
  • Understand how queries must change to take advantage of partitions.
  • Plan migration with minimal disruption and clear rollback paths.

In some cases, simply partitioning very large time-series or log tables can significantly improve performance without a full architectural overhaul.
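
A minimal sketch of what that looks like on SQL Server (names and boundary values are illustrative): monthly range partitioning on a timestamp, so queries touch only the relevant slices and old months can be switched out instead of deleted row by row.

```sql
CREATE PARTITION FUNCTION pf_MonthlyByDate (DATETIME2(3))
    AS RANGE RIGHT FOR VALUES (N'2024-01-01', N'2024-02-01', N'2024-03-01');

CREATE PARTITION SCHEME ps_MonthlyByDate
    AS PARTITION pf_MonthlyByDate ALL TO ([PRIMARY]);

-- The clustered key must include the partitioning column.
CREATE TABLE dbo.EventLog
(
    EventId    BIGINT IDENTITY NOT NULL,
    OccurredAt DATETIME2(3)    NOT NULL,
    Payload    NVARCHAR(MAX)   NULL,
    CONSTRAINT PK_EventLog PRIMARY KEY CLUSTERED (OccurredAt, EventId)
) ON ps_MonthlyByDate (OccurredAt);
```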

Performance for analytics, BI & ETL

Many modern products do not just serve end-user traffic; they also feed data to analytics, BI dashboards and external reporting. At khajavi.tech, this is a natural part of our work, because we also design BI, ETL and data platforms.

Separating OLTP and OLAP concerns

Transactional databases and analytical workloads have different needs:

  • OLTP cares about low-latency, high-concurrency single-record operations.
  • OLAP cares about scanning and aggregating large volumes of data.

When both are forced into one database without a plan, performance suffers. We help you:

  • Decide where a separate data warehouse or analytical store makes sense.
  • Design ETL/ELT pipelines that keep analytical data fresh without overloading production.
  • Use incremental loads, materialised views or pre-aggregations where appropriate.
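
On SQL Server, one pre-aggregation option is an indexed view, which the engine maintains automatically as rows change (a sketch, reusing the hypothetical dbo.Orders table from earlier):

```sql
-- Dashboards then read a few hundred rollup rows instead of scanning
-- the whole order table. Indexed views require SCHEMABINDING and
-- COUNT_BIG(*), and add some cost to writes.
CREATE VIEW dbo.vDailyRevenue
WITH SCHEMABINDING
AS
SELECT CAST(PlacedAt AS DATE) AS OrderDate,
       COUNT_BIG(*)           AS OrderCount,
       SUM(Total)             AS Revenue
FROM   dbo.Orders
GROUP BY CAST(PlacedAt AS DATE);
GO
CREATE UNIQUE CLUSTERED INDEX IX_vDailyRevenue
    ON dbo.vDailyRevenue (OrderDate);
```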

Reliable ETL & data flows

ETL and data sync jobs often become invisible sources of load and instability. We review:

  • When jobs run and how they align with traffic patterns.
  • Which jobs perform large unbounded scans or heavy joins on the primary.
  • How failures and retries are handled to avoid repeated heavy workloads on error.

We aim to make data flows predictable, observable and respectful of live traffic, avoiding a situation where BI and reporting quietly compromise user experience.
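
The single most common fix here is replacing full re-scans with watermark-based incremental loads. A hedged sketch (the etl and dw table names are hypothetical):

```sql
-- Each run picks up only rows changed since the last run, bounded at
-- both ends so consecutive runs do not overlap.
DECLARE @from DATETIME2(3),
        @to   DATETIME2(3) = SYSUTCDATETIME();

SELECT @from = LastLoadedAt
FROM   etl.WatermarkLog
WHERE  SourceTable = N'dbo.Orders';

INSERT INTO dw.OrdersStaging (OrderId, CustomerId, Status, Total, ModifiedAt)
SELECT OrderId, CustomerId, Status, Total, ModifiedAt
FROM   dbo.Orders
WHERE  ModifiedAt > @from
  AND  ModifiedAt <= @to;

UPDATE etl.WatermarkLog
SET    LastLoadedAt = @to
WHERE  SourceTable = N'dbo.Orders';
```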

Modernising legacy database setups

Many teams run on setups that were reasonable in the past but now limit what is possible: older SQL Server versions, tight coupling between app and database, or schemas shaped by decisions that no longer match the product. We help you modernise without pretending you can pause everything and start over.

Typical modernisation work includes:

  • Upgrading database versions and configurations safely.
  • Cleaning up unused tables, columns and stored procedures that confuse everyone.
  • Gradually introducing better patterns for schema, indexing and query design.
  • Preparing the system for future architectural changes, even if you are not ready yet.

The aim is to make your current system a stable base for future growth, not to demand a risky big-bang rewrite.

How our engagement typically works

1. Discovery & high-level assessment

We begin with a conversation about your product, current pain points and constraints. We look at metrics, monitoring dashboards, incidents and any prior performance work. This frames the problem and suggests a focused initial scope.

2. In-depth performance audit

In the audit phase we:

  • Review database metrics, slow query logs and relevant traces.
  • Analyse schema, indexes and key queries in critical flows.
  • Examine ETL, BI and reporting workloads that hit your databases.
  • Identify quick wins and deeper structural issues.

The outcome is a clear view of what is happening now and a prioritised list of changes with estimated impact and risk.
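
Much of this analysis starts with SQL Server's own instrumentation. For example, a standard query over the plan-cache statistics DMVs surfaces the most expensive statements by CPU, which is usually where tuning effort pays off first:

```sql
SELECT TOP (10)
       qs.total_worker_time / 1000 AS total_cpu_ms,
       qs.execution_count,
       qs.total_elapsed_time / qs.execution_count / 1000 AS avg_elapsed_ms,
       SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                 ((CASE qs.statement_end_offset
                       WHEN -1 THEN DATALENGTH(st.text)
                       ELSE qs.statement_end_offset
                   END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM   sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;
```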

3. Implementation of high-impact improvements

Next we work with your team to implement improvements. This can include:

  • Query tuning and index changes.
  • Refinement of schema areas that cause repeated problems.
  • Adjustments to caching, connection pooling and configuration.
  • Changes to ETL schedules and patterns.

We prefer iterative changes with measurement at each step, instead of large opaque restructures.

4. Observability, guardrails & runbooks

Performance work is only sustainable if you can see what is going on and respond early. We help you:

  • Define metrics and alerts that actually matter.
  • Set thresholds that reflect your context, not generic defaults.
  • Create simple runbooks for common incidents so on-call engineers are not guessing.

This moves you from “we hear about problems from customers” to “we see early signals and act first.”

5. Long-term scaling strategy

Finally, we work with you to design a roadmap for the next 12–24 months of growth:

  • What data growth you expect and how it will impact performance.
  • Which architectural changes (replicas, warehouses, partitions, sharding) make sense when.
  • How to keep performance and scalability in mind as you ship new features.

This does not lock you into a rigid plan; it gives you a realistic view of how the system will behave as you grow.

Who this service is for

Database Performance & Scalability services at khajavi.tech are a strong fit if:

  • Your product is already in production with real usage and data.
  • Performance issues have started to impact users, teams or growth plans.
  • You want to fix root causes rather than endlessly upgrading hardware.
  • You care how database performance connects to BI, analytics and product decisions.

We work with SaaS products, B2B platforms, internal tools, data-heavy web apps and organisations where the database is central to day-to-day operations.

Working with khajavi.tech

khajavi.tech is a founder-led studio, so you work directly with senior expertise – not a rotating cast of intermediaries. You can engage for a focused audit and remediation, a project around a specific migration or scaling challenge, or an ongoing partnership where we continuously refine your data layer alongside your product.

The common thread is pragmatic depth: we go as deep as needed technically while keeping the conversation grounded in your business reality. The goal is simple: fast, predictable and scalable data systems that let your product, teams and customers breathe.

Database Performance & Scalability – Frequently Asked Questions

1. What does “Database Performance & Scalability” mean at khajavi.tech?

For us, database performance and scalability cover everything from how quickly individual queries respond to how your system behaves as data and traffic grow over time. It includes schema and index design, query tuning, caching strategies, transaction patterns, infrastructure configuration, ETL and BI workloads and the way all of these interact under real user load.

2. Which database technologies do you usually work with?

We work most often with relational databases such as SQL Server, PostgreSQL and MySQL in cloud and on-premise environments. Because khajavi.tech also focuses on .NET and modern web applications, SQL Server shows up a lot, but the principles we apply are similar across engines. When specialised stores are needed for particular workloads, we design around that too.

3. What are common signs that our database is becoming a bottleneck?

Typical signs include slow or inconsistent response times, sudden spikes in CPU or I/O during certain operations, queries that sometimes time out, deadlocks, blocking chains, recurring incidents during peak traffic, ETL jobs that push into business hours and developers warning that “we should not touch that part of the database.” If any of these are familiar, it is usually worth investigating.

4. How do you diagnose database performance issues?

We start by looking at how your system behaves under real conditions: monitoring dashboards, slow query logs, traces, incident histories and code paths. From there we drill into specific queries, schemas, indexes and patterns in your application. The diagnosis is a mix of instrumentation, analysis and conversations with your team about what actually hurts day to day.

5. Do you need access to our production environment to help?

We usually need read access to production metrics, logs and sometimes anonymised data or query plans. When direct access is not possible, we work with exported plans, captured traces and staging environments that mirror production closely. Security, privacy and compliance come first; we adapt to the policies you already have in place.

6. Will performance work cause downtime for our users?

The goal is always to minimise or avoid downtime. Many improvements – such as adding or adjusting indexes, query-level tuning or configuration changes – can be done with no or very low impact when planned carefully. For risky operations, we schedule maintenance windows, use rolling strategies or blue-green approaches and ensure rollback plans are in place.

7. Do you only optimise queries, or do you also change schemas and architecture?

We work at multiple levels. Sometimes a query or index change is enough. In other situations, a small schema adjustment, a better separation of workloads or the introduction of a replica or warehouse will make a bigger difference. We propose options with clear trade-offs so you can decide what level of change is appropriate for your current stage and risk appetite.

8. How do you decide between tuning the database and just upgrading hardware?

Hardware upgrades can be a reasonable short-term step, especially if you are clearly resource-constrained. But when hardware is repeatedly increased without addressing underlying inefficiencies, costs rise while performance still disappoints. We look at resource usage, query patterns and growth trends to decide whether tuning, hardware, or a mix of both will give you the best return at this moment in time.

9. Can you help us handle peak events like campaigns or seasonal spikes?

Yes. We analyse how your system behaves under peak loads, identify which operations create the most pressure and design strategies such as caching, replicas, prioritisation of workloads and scheduling of heavy jobs. The goal is to handle spikes without resorting to “all hands on deck” every time your traffic increases or you run a campaign.

10. How do you work with ORMs in the context of database performance?

ORMs are valuable tools, but they must be used consciously. We do not require you to abandon your ORM. Instead, we examine the generated queries and address patterns such as N+1 queries, overly broad selects and misused features like lazy loading in hot paths. Where necessary, we introduce targeted raw SQL or stored procedures to handle critical operations more directly and predictably.

11. What is your approach to multi-tenant databases?

Multi-tenant setups come in many forms: shared databases with tenant keys, separate schemas per tenant, or separate databases entirely. We review the current approach, evaluate how it scales in terms of performance, isolation and operational complexity and suggest improvements – whether that is better indexing by tenant, partitioning or gradual separation for very large tenants that need special handling.

12. How do you decide which indexes to create or remove?

We focus on indexes that support actual workloads. That means analysing slow queries, high-frequency queries and important business operations. For each index, we weigh its benefits against its cost on writes and maintenance. At the same time we look for rarely used or overlapping indexes that can be simplified or removed to improve write performance and reduce maintenance overhead.

13. How often should we review our database performance?

Performance should not be a one-time project. As data, features and usage patterns change, what worked last year may not be ideal now. Many teams benefit from periodic reviews – for example, once or twice a year – plus more focused work when major feature launches or architectural changes are planned and before big growth milestones like new regions or large customers go live.

14. How do you handle the difference between transactional and analytical workloads?

We treat transactional (OLTP) and analytical (OLAP) workloads as distinct concerns, even if they currently live in the same database. Often that means designing data flows and possibly separate stores for analytics, so that heavy reporting and BI work do not slow down user-facing operations. The exact approach depends on your stack, tools and scale, but the principle is to stop them fighting over the same resources.

15. Can you help us design or improve our data warehouse and ETL processes?

Yes. khajavi.tech also focuses on BI and data platforms. We can help you design or refine your data warehouse schema, ETL or ELT pipelines and the way they interact with your production databases. The aim is a setup where analytics are trustworthy and timely without putting unnecessary load on live systems or forcing analysts to work around missing data.

16. Do you work with cloud-managed databases?

We regularly work with managed services offered by major cloud providers. While core database concepts remain the same, managed environments have their own configurations, limits and features. We help you understand what your provider offers, how to configure it sensibly and how to use it effectively instead of defaulting to the most expensive tier or relying on magic “auto” settings that hide problems rather than solving them.

17. How do you handle security and compliance during performance work?

Security and compliance come first. We follow the access boundaries you define, work with anonymised or masked data when appropriate and document what we need to see and why. If you have specific regulatory constraints, we adapt our approach so that performance improvements do not conflict with your obligations and audits can clearly see what changed and when.

18. What impact can database performance work have on our costs?

Performance work can reduce costs by making better use of existing resources, delaying or avoiding unnecessary hardware upgrades and identifying workloads that can be moved to cheaper, more appropriate environments. In some cases, we may recommend targeted increases in capacity where it clearly pays off in stability, user experience and reduced incident firefighting time for your team.

19. How do you make sure changes are safe before we ship them?

We encourage a combination of staged testing and careful rollout. That includes testing in environments that resemble production as closely as possible, capturing query plans and performance metrics and then rolling out changes gradually with monitoring. For high-risk operations, we plan explicit rollback paths in advance so the team knows exactly what to do if a change behaves differently than expected in the wild.

20. Can you help set up monitoring and alerting for database performance?

Yes. Observability is central to sustainable performance. We help identify the right metrics to track, set sensible thresholds for alerts and ensure that dashboards are understandable to the people who need them – not just database specialists. The result is a monitoring setup that supports day-to-day decisions instead of generating noise that everybody learns to ignore.

21. How do you balance quick wins with deeper architectural changes?

We usually start with quick wins that have clear impact and limited risk: fixing obvious query issues, adjusting problematic indexes, cleaning up heavy jobs. At the same time, we identify deeper structural problems and propose an incremental roadmap for addressing them. That way you see improvements early while still moving toward a more scalable architecture over the following months, not just days.

22. Can you help with concurrency issues like locks and deadlocks?

Yes. Locking and deadlocks are often symptoms of how work is scheduled and how transactions are scoped. We analyse patterns of access, isolation levels and specific sequences of operations that lead to contention. Solutions might involve reorganising work, splitting operations, adjusting isolation levels or modifying how and when long-running jobs run so they no longer collide with peak user traffic.

23. How do you handle long-running batch jobs that conflict with live traffic?

We first understand what those jobs do and why they need to run when they do. Then we explore options such as rescheduling, breaking jobs into smaller chunks, moving them to replicas or warehouses, or optimising the underlying queries and schema. The intention is to keep critical live traffic responsive while batch work still gets done reliably and within acceptable time windows for the business.

24. What is your approach to caching in relation to databases?

We design caching so that it clearly improves user experience without hiding deeper problems or creating stale-data issues. That means deciding what to cache, how long, where to store the cache and how to handle invalidation. We also look for places where “just cache it” has been used as a band-aid for inefficient queries that should instead be fixed or redesigned to be fast enough without relying on a fragile cache.

25. How do you scale write-heavy systems?

Write-heavy systems require careful attention to schema, indexing, transaction patterns and sometimes data distribution. We look at how writes are grouped, whether some can be buffered or batched, how often the same hot rows are touched and whether parts of the workload can be moved to specialised stores or partitioned tables. The specific strategy depends on your data model, consistency requirements and the types of operations your product needs to support at scale.

26. Will enforcing more constraints and validations slow down our database?

Constraints and validations cost something, but they also protect data quality and simplify reasoning for developers. We help you decide where constraints are essential and where they might be better handled in application code or data pipelines. Often, better schema design and indexing can offset the cost while retaining the benefits of strong data integrity and fewer confusing edge cases for users and developers alike.

27. When does partitioning make sense for our database?

Partitioning starts to make sense when individual tables become very large and queries naturally target specific slices of data – for example, time ranges, regions or tenants. We look at your query patterns, maintenance routines and growth trajectory to decide whether partitioning would genuinely help or simply add complexity without clear benefit. When we do recommend it, we plan migration paths that keep risk and downtime under control.

28. Do you design sharding strategies as well?

When you reach a point where a single database or cluster can no longer handle your scale, sharding becomes an option. We help you evaluate sharding keys, trade-offs around cross-shard operations, operational complexity and migration paths. Because sharding is a big step, we recommend it only when other strategies are no longer sufficient and when your team is ready to operate a sharded environment responsibly.

29. How does database design change in a microservices architecture?

In microservices, databases often become more isolated: each service may own its own data and schema. That changes how transactions, queries and reporting work. We help you design patterns for data ownership, communication, reporting and consistency so that performance and scalability remain manageable even as the number of services grows and the system evolves over multiple years and teams.

30. Is switching to a different database engine always the answer?

Changing database engines is a major decision. In many cases, significant performance improvements can be achieved without switching, by tuning the existing system. When a switch is truly justified – for example, because your workload no longer fits the current engine – we help plan and execute migrations in a controlled way, including data moves, compatibility issues and cut-over strategies that protect your users.

31. Can you help us stabilise the system quickly while we plan long-term changes?

Yes. We often work in two layers: immediate stabilisation (reducing the frequency and severity of incidents) and longer-term design. Quick actions might include index tuning, re-scheduling heavy jobs or targeted query optimisation. In parallel, we map out the larger structural changes needed for durable scalability so you are not stuck in firefighting mode forever.

32. How do you think about data growth over multiple years?

We look at historical data growth, current models and product roadmap to estimate how large tables and workloads will become over several years. From there we plan archiving, partitioning, warehouse offloading or other strategies to keep performance sustainable. The aim is to avoid the situation where a table silently grows until it becomes a crisis at the worst possible time for the business.

33. Can you help if analytics queries are running directly on the production database?

Yes. This is common and often the source of unpredictable performance. We assess which analytical workloads can be moved to replicas, warehouses or dedicated analytical stores. We then design data flows so that BI has what it needs without impacting day-to-day product usage, and we make sure the new setup is observable so teams actually trust the data they are seeing there.

34. How long do typical Database Performance & Scalability engagements last?

Length depends on scope. A focused performance audit with targeted fixes might run over a few weeks. Larger engagements, including architectural changes or substantial data platform work, can span several months. Early in the process, khajavi.tech proposes a realistic timeline based on your priorities, team capacity and the current state of your system, then adjusts as we learn more together.

35. How do you collaborate with our internal team?

We work alongside your engineers, product managers, data people and operations teams. That can mean pairing on query tuning, reviewing code, explaining findings and decisions and helping shape internal practices around performance. The goal is not just to improve the system but to transfer understanding so your team grows stronger, not more dependent, over time.

36. What if our database and codebase are old and poorly documented?

Many systems we help are exactly in that state. We approach them with curiosity and care, mapping out what is actually in use, where risky assumptions exist and how to bring more structure gradually. Documentation often emerges as a by-product of the analysis, giving your team a clearer picture than they had before and reducing the fear of touching critical parts of the system after improvements are made.

37. Can you work within strict regulatory environments?

Yes. If you operate in regulated industries, we adapt our access patterns, documentation and change processes accordingly. That includes limiting access, using anonymised datasets where possible and aligning performance work with your existing governance and audit requirements so that improvements pass scrutiny without creating compliance surprises later on.

38. Do you work remotely with international teams?

khajavi.tech is comfortable working with fully remote, international teams across time zones. We use async updates, focused calls and clear written documentation so everyone stays aligned, regardless of location. This is particularly important for performance work, where context and history matter as much as specific query plans or metrics screenshots shared in a channel or document.

39. What kind of companies benefit most from your database performance services?

Products and organisations for whom data and uptime are central to value delivery benefit the most: SaaS platforms, marketplaces, analytics tools, internal operational systems and any web application where slow or unstable data layers directly affect customers and teams. If your data layer feels like a constraint, this work usually has a high return on attention and budget.

40. How can we start working with khajavi.tech on Database Performance & Scalability?

You can reach out via the contact options on khajavi.tech, or through WhatsApp or email. Share a short description of your product, current performance pain points and what you hope to achieve. From there, we can propose next steps, often starting with a focused performance and scalability assessment to understand where we can create the most value fastest.