Struggling with devices dying sooner than expected? The advertised battery life seems like a promise, but the reality often falls short. You're left feeling confused by this performance gap.
Nominal capacity is the manufacturer's ideal, lab-tested rating, often labeled in milliamp-hours (mAh). Actual capacity is the real-world energy you get, which is impacted by usage intensity, temperature, and battery age. The difference is simply theory versus reality.

I talk to engineers and procurement officers every day. A frequent topic, especially with detail-oriented clients like Michael, who develops medical devices, is the gap between the capacity written on a battery's datasheet and the performance they see in their products. They'll say, "Caroline, we bought a 3000mAh battery, but our tests only show 2700mAh. What's going on?" This confusion is common, and it's worth clearing up: the number on the box is just the beginning of the story. Let's dive deeper into what's really happening inside your battery and why that difference exists.
Why is a battery's actual capacity often lower than its nominal capacity?
You bought a high-capacity battery, but it just doesn't last as long as you hoped. It can feel misleading when the performance doesn't match the label.
A battery's actual capacity is often lower because real-world use is not the same as a lab test. High power draw, extreme temperatures, and energy lost during power conversion all reduce the effective capacity. The nominal value is measured under perfect, controlled conditions.

In my years of manufacturing batteries at Litop, I've learned that transparency is key. The nominal capacity is a benchmark, an "ideal score" achieved in a perfect environment. But your device doesn't operate in a perfect environment. Several factors come into play that reduce the energy you can actually use.
The Impact of Discharge Rate (C-Rate)
Think of a battery like your own lungs. If you jog at a slow, steady pace, you can breathe calmly and run for a long time. If you suddenly break into a full sprint, you'll be gasping for air and tire much faster. Batteries work in a similar way. The "C-rate" measures how fast you discharge the battery relative to its capacity. Nominal capacity is almost always measured at a very low C-rate, like 0.2C. For a 5000mAh battery, that's a gentle 1000mA current. However, if your device demands a lot of power quickly, say at a 2C rate (10,000mA), the voltage drop across the battery's internal resistance grows, more energy is lost as heat, and the cell hits its cutoff voltage sooner. As a result, the usable capacity you actually get is lower than the nominal rating.
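To make the arithmetic concrete, here is a minimal Python sketch that converts a C-rate into a discharge current. The capacity values are simply the examples from the paragraph above.

```python
def discharge_current_ma(capacity_mah: float, c_rate: float) -> float:
    """Convert a C-rate into a discharge current in milliamps.

    A 1C discharge drains the nominal capacity in one hour, so the
    current is simply the capacity multiplied by the C-rate.
    """
    return capacity_mah * c_rate

# Examples from the paragraph above: a 5000mAh cell.
print(discharge_current_ma(5000, 0.2))  # 1000.0 mA -- the gentle rating test
print(discharge_current_ma(5000, 2.0))  # 10000.0 mA -- a demanding 2C load
```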
Temperature's Critical Role
Temperature is another huge factor. Most batteries are rated at room temperature, around 25°C (77°F). But what if your product is a GPS tracker used in a Canadian winter? At -20°C (-4°F), the chemical reactions inside the battery slow down dramatically. The electrolyte becomes more viscous, like honey in the cold, making it harder for lithium ions to move. As a result, the actual capacity can drop to just 60%-80% of its nominal value. On the flip side, very high temperatures can temporarily increase performance but will permanently damage the battery and accelerate its aging.
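As a rough illustration only, the sketch below interpolates a usable-capacity factor between the room-temperature rating and the lower end of the 60%-80% figure mentioned for -20°C. The linear curve and the numbers are hypothetical, not measured data; real derating depends on chemistry, discharge rate, and cell construction.

```python
def cold_weather_capacity(nominal_mah: float, temp_c: float) -> float:
    """Very rough linear derating from 100% at 25°C down to ~70% at -20°C.

    Purely illustrative: real derating curves are not linear and vary
    widely between cell designs.
    """
    if temp_c >= 25:
        return nominal_mah
    # Interpolate from 1.0 at 25°C down to 0.7 at -20°C, clamped below.
    factor = max(0.7, 1.0 - (25 - temp_c) * (0.3 / 45))
    return nominal_mah * factor

print(round(cold_weather_capacity(5000, -20)))  # ~3500 mAh of the rated 5000
```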
The Hidden Cost of Power Conversion
This is a subtle point that many people miss, especially with power banks. A power bank might be labeled "20,000mAh," but that number refers to the capacity of the internal 3.7V battery cells. Your smartphone charges at 5V or higher. To deliver power, the power bank must use a converter circuit to boost the voltage from 3.7V to 5V. This conversion process is not 100% efficient; some energy is always lost as heat.
| Feature | Internal Battery Cell | Output to Your Phone |
|---|---|---|
| Voltage | 3.7V | 5V (or higher) |
| Stated Capacity | 20,000mAh | Not directly applicable |
| Effective Capacity | N/A | ~13,000mAh |
| Energy (Watt-hours) | 74Wh (20Ah * 3.7V) | ~65Wh (accounting for loss) |
Because of this voltage conversion and efficiency loss, a 20,000mAh power bank often only delivers around 11,000 to 13,000mAh of actual charge to your device. This isn't false advertising; it's physics.
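The table's numbers follow directly from a watt-hour calculation. Here is a small sketch of that math; the 90% converter efficiency is an assumption for illustration, and real efficiency varies with load and circuit design.

```python
def deliverable_mah(cell_mah: float, cell_voltage: float = 3.7,
                    output_voltage: float = 5.0, efficiency: float = 0.90) -> float:
    """Estimate the charge a power bank can deliver at its USB output.

    Energy is conserved (minus conversion losses), so convert the cell
    rating to watt-hours, apply the efficiency, then divide by the
    output voltage.
    """
    energy_wh = (cell_mah / 1000) * cell_voltage   # e.g. 20Ah * 3.7V = 74Wh
    usable_wh = energy_wh * efficiency             # heat lost in the boost converter
    return usable_wh / output_voltage * 1000       # back to mAh at 5V

print(round(deliverable_mah(20000)))  # ~13320 mAh -- in line with the ~13,000mAh above
```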
What factors cause a battery's actual capacity to decrease over time (capacity fade)?
Remember when your new device lasted all day on a single charge? Now, a year later, you're constantly looking for an outlet by mid-afternoon. This gradual decline is frustrating.
Capacity fade is caused by irreversible chemical changes inside the battery with each charge and discharge cycle. Factors like high temperatures, storing the battery at 100% or 0% charge, and very fast charging[^1] all accelerate this natural aging process.

Every lithium-ion battery is a consumable component. From the moment it leaves our factory, it begins a very slow aging process. This is known as capacity fade, and it's a permanent reduction in the amount of charge the battery can store. While you can't stop it completely, you can certainly understand the factors that speed it up. For our clients who build long-lasting industrial and medical devices, managing this is a top priority.
Cycle Life and Chemical Degradation
Every time you charge and discharge a battery, you complete one "cycle." During this process, lithium ions move from the cathode to the anode and back again. But this process isn't perfect. Over hundreds of cycles, tiny, irreversible changes occur. A thin layer of residue, called the Solid Electrolyte Interphase (SEI)[^2], can slowly build up on the anode. This layer is necessary for the battery to function, but as it grows thicker with age, it traps lithium ions, taking them out of circulation permanently. Fewer available ions mean less charge the battery can hold. Think of it like a bucket that gets a thin layer of dried cement on the inside with every use—over time, the bucket just can't hold as much water.
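If it helps to picture the cumulative effect, the toy model below applies a small, fixed capacity loss per cycle. The 0.02% per-cycle figure is purely hypothetical; real fade rates depend on chemistry, temperature, and depth of discharge.

```python
def capacity_after_cycles(nominal_mah: float, cycles: int,
                          fade_per_cycle: float = 0.0002) -> float:
    """Toy model: each full cycle permanently removes a small fraction of
    capacity, loosely mimicking lithium lost to SEI growth.
    The fade rate here is hypothetical, not a datasheet value."""
    return nominal_mah * (1 - fade_per_cycle) ** cycles

print(round(capacity_after_cycles(3000, 500)))  # ~2714 mAh left after 500 cycles
```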
The Damage from Environmental and Electrical Stress
How you use and store your battery has a huge impact on its lifespan. Two of the biggest enemies of a lithium-ion battery are heat and extreme states of charge.
- Heat: Exposing a battery to high temperatures (above 35°C or 95°F) dramatically accelerates the chemical reactions that cause degradation. I once worked with a client whose devices were failing prematurely in the field. We discovered they were being stored in service vans in Arizona, where interior temperatures reached over 60°C (140°F). The heat was literally cooking the batteries and cutting their lifespan in half.
- Extreme States of Charge: Leaving a battery at 100% charge for long periods is also stressful. It keeps the cathode in a highly oxidized, unstable state. Similarly, storing a battery completely empty (0%) can lead to other damaging side reactions. The sweet spot for long-term storage is around a 40-50% charge level at a cool temperature.
The Double-Edged Sword of Fast Charging
Fast charging is incredibly convenient, but it comes at a cost. Pushing a high current into the battery generates more heat and puts more physical stress on its internal structures. It's like filling a water balloon with a fire hose instead of a gentle tap—you can do it, but you risk damaging the balloon. While modern Battery Management Systems (BMS) are designed to manage this, frequent fast charging will generally cause more rapid capacity fade than slower, standard-rate charging. It's a trade-off between convenience today and longevity tomorrow.
How can I accurately measure or test the actual remaining capacity of my battery?
You suspect your battery isn't performing as it should, but you can't be sure without hard data. For critical applications in fields like medical devices or IoT, guessing isn't good enough.
To accurately measure actual capacity, you need a specialized battery analyzer. This device performs a controlled, full charge-discharge cycle and precisely measures the total energy delivered in milliamp-hours (mAh). This gives you a true capacity reading, not just an estimate.

As someone who guarantees battery performance for a living, I can tell you that "feel" isn't a measurement. You need proper testing. For our B2B clients, this is a non-negotiable part of their quality assurance process. While it's harder for a casual user, there are several methods ranging from professional-grade to DIY estimates.
The Professional Method: Using a Battery Analyzer
This is the gold standard and what we use in our labs at Litop. A battery analyzer or a programmable electronic load is a sophisticated piece of equipment. The process is straightforward but precise:
- Full Charge: The analyzer first charges the battery to its maximum voltage (e.g., 4.2V for a standard Li-ion cell) using the manufacturer-specified charging algorithm.
- Rest Period: It lets the battery rest for a set period, like an hour, to allow the surface voltage to stabilize.
- Controlled Discharge: The analyzer then discharges the battery at a constant, specified current (e.g., 0.5C) until it reaches its designated cutoff voltage (e.g., 3.0V).
- Calculation: During the discharge, the machine precisely measures the total time it took and calculates the capacity using the formula: Capacity (Ah) = Discharge Current (A) × Time (h).
This method removes all variables and gives you a definitive, repeatable measurement of the battery's true capacity under specific conditions.
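Step 4 is simple enough to express directly. This sketch applies the same formula, using made-up readings purely for illustration.

```python
def measured_capacity_mah(discharge_current_ma: float, discharge_hours: float) -> float:
    """Capacity (mAh) = discharge current (mA) × time (h), the same formula
    the analyzer applies during a constant-current discharge."""
    return discharge_current_ma * discharge_hours

# Hypothetical analyzer run: a "3000mAh" cell discharged at 0.5C (1500mA)
# took 1.85 hours to reach its 3.0V cutoff voltage.
print(measured_capacity_mah(1500, 1.85))  # 2775.0 mAh -- below the nominal rating
```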
DIY Methods for a Rough Estimate
If you don't have access to an analyzer, you can still get a reasonable estimate.
- For USB Devices: You can buy an inexpensive "USB Doctor" or USB power meter. You plug it inline between your power source and the device (or power bank) you're testing. To measure output capacity, start with a fully charged power bank, reset the meter, and then use it to charge another device until the power bank is fully drained. The meter will show the total mAh (and Wh) delivered. It's not as precise as an analyzer due to varying charge rates and conversion losses, but it's a great way to spot a battery that is severely underperforming, as the rough check below illustrates.
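If you log the meter's reading, a quick comparison like the one below will flag a pack that is well off its rating. The numbers are hypothetical, and the 90% efficiency allowance is an assumption that stands in for converter loss.

```python
def usb_meter_verdict(rated_mah: float, delivered_mah: float,
                      cell_voltage: float = 3.7, output_voltage: float = 5.0,
                      efficiency: float = 0.90) -> str:
    """Compare a USB meter's delivered-mAh reading (at the output voltage)
    against a rough expectation derived from the rated cell capacity.
    The efficiency figure is an assumption, not a measured value."""
    expected = rated_mah * cell_voltage / output_voltage * efficiency
    return f"delivered {delivered_mah / expected:.0%} of the ~{expected:.0f} mAh expected"

# Hypothetical reading: a "10,000mAh" power bank that delivered 5,200mAh at 5V.
print(usb_meter_verdict(10000, 5200))  # about 78% -- possibly aged or underperforming
```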
Software-Based Estimates
Your smartphone, laptop, and many other modern devices have a built-in Battery Management System (BMS)[^3]. This system estimates the battery's health and remaining capacity. It uses complex algorithms that track charge cycles, temperature, and voltage curves. However, this is still an estimate, not a direct measurement. Over time, these software-based readings can drift and become inaccurate. That's why some devices recommend a "recalibration cycle" (a full discharge followed by a full charge) to help the BMS re-learn the battery's actual endpoints.
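Under the hood, most BMS estimates start from coulomb counting: integrating current over time. Here is a stripped-down sketch of that idea with hypothetical readings; real BMS firmware also fuses voltage curves, temperature, and aging models.

```python
def coulomb_count_mah(current_samples_ma: list[float], interval_s: float) -> float:
    """Integrate periodic current readings (mA, positive = discharge) into
    total charge moved, in mAh. Real BMS code also corrects for measurement
    offset, temperature, and cell aging, which is why readings can drift."""
    total_ma_s = sum(current_samples_ma) * interval_s  # milliamp-seconds
    return total_ma_s / 3600                           # convert to mAh

# Hypothetical log: one reading per minute over two hours at a steady ~800mA load.
samples = [800.0] * 120
print(coulomb_count_mah(samples, 60))  # 1600.0 mAh drawn from the pack
```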
How do battery manufacturers measure nominal capacity? What standards do they use?
You see "5000mAh" on a battery, but what does that number really mean? Without understanding the testing method behind it, the number is just marketing. This ambiguity can create real uncertainty.
Manufacturers measure nominal capacity by discharging a new battery at a very low, constant current (typically 0.2C) at room temperature (around 25°C) from a full charge down to its cutoff voltage. This is done according to international standards like IEC 61960[^4].

When we at Litop put a capacity rating on our batteries, that number isn't arbitrary. It's the result of a highly standardized and repeatable process that is recognized across the industry. This ensures that when a client like Michael compares our 3000mAh battery to a competitor's, they are comparing apples to apples—at least in a lab setting.
The Standardized Testing Protocol
The process for determining nominal capacity is defined by international standards, primarily from the International Electrotechnical Commission (IEC). The key conditions are:
- Temperature: The test must be conducted in a controlled environment, typically 25°C ± 2°C (77°F). This eliminates temperature as a variable.
- Current: The battery is discharged at a low, constant current of 0.2C. For a 3000mAh battery, this is a gentle 600mA discharge current. This slow rate maximizes the energy that can be extracted and represents the best-case scenario.
- Voltage Range: The discharge runs from the battery's fully charged voltage down to its specified end-of-discharge voltage (cutoff voltage). This voltage is set by the manufacturer to prevent over-discharging, which can damage the cell.
- Rest Periods: The standards also require rest periods (often one hour or more) before and after charging/discharging. This allows the battery's internal chemistry to settle and gives a more stable voltage reading.
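To translate the protocol into numbers for a specific cell, a quick sketch like this one can derive the 0.2C test current and the nominal run time. The cell value shown is just the 3000mAh example from the text; rest periods and exact cutoff voltages still come from the datasheet.

```python
def capacity_test_parameters(rated_mah: float, c_rate: float = 0.2) -> dict:
    """Derive the discharge current and nominal duration for a
    standard-conditions capacity test (0.2C at ~25°C)."""
    current_ma = rated_mah * c_rate
    nominal_hours = 1 / c_rate  # a 0.2C discharge nominally lasts 5 hours
    return {"discharge_current_ma": current_ma, "nominal_duration_h": nominal_hours}

print(capacity_test_parameters(3000))
# {'discharge_current_ma': 600.0, 'nominal_duration_h': 5.0}
```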
Rated vs. Typical Capacity
On our technical datasheets, you'll often see two different capacity values. It's important to know the difference.
| Capacity Type | Definition | Purpose |
|---|---|---|
| Typical Capacity | The average capacity measured from a large batch of newly produced cells. | Provides a realistic expectation of average performance. |
| Rated Capacity | The guaranteed minimum capacity that a cell will provide under standard conditions. | The official value used for safety certifications and quality assurance. |
For example, a cell might have a typical capacity of 3000mAh but a rated (minimum) capacity of 2900mAh. This means that while the average battery will be 3000mAh, we guarantee every single one will be at least 2900mAh. Reputable manufacturers are always transparent about both values and the conditions under which they were measured. If a supplier can't provide you with a detailed datasheet explaining this, it's a major red flag.
Conclusion
Understanding the difference between nominal and actual capacity is key. Nominal capacity is a standardized lab benchmark, while actual capacity is the dynamic result of your specific application. By knowing this, you can make smarter decisions for your projects and set realistic performance expectations.
[^1]: Understanding the trade-offs of fast charging helps you balance convenience with long-term battery health.
[^2]: Understanding SEI formation explains long-term battery degradation and how to slow it down.
[^3]: Learning about BMS helps you interpret device battery readings and understand their limitations.
[^4]: IEC 61960 ensures fair, comparable battery ratings across manufacturers, helping you make informed purchases.