Energy efficiency along the entire value chain
In this blog post, we summarize several of our individual posts on energy efficiency, namely (i) generation efficiency, (ii) transmission efficiency, and (iii) application efficiency, to provide a full picture of energy efficiency along the value chain.
While primary energy factors (PEF) of energy resources within the fossil fuel categories (oil, gas, coal) and nuclear have remained static for the past few years, the PEF of renewable energies (solar, wind, hydro, biofuels) has increased at a rapid pace.
For one thing, fossil fuels have been optimized for more than a century and have reached very high factor values, proof of efficient resource-to-energy transformation. These achievements, however, are approaching a ceiling, with only small room left for improvement. By contrast, renewable energies, especially solar and wind, are still early in their technological development. As a result, their efficiency factors keep climbing, with the added benefit of little or no CO2 output.
As a follow-up to our post on reserves vs. resources (where we outlined when fossil fuels will finally be used up), in this blog post we investigate whether the primary energy factors of renewable energies are already comparable and competitive with conventional sources. To do so, we compare the primary energy input with the equivalent electricity output of the most significant energy sources.
For renewable energy sources, most of the reviewed studies apply a conversion factor of 1. This 100% factor reflects the convention that the source is inexhaustible and that all energy generated from renewables is consumed immediately; fossil-fired power plants, by contrast, are often throttled or shut down. For a better comparison, we therefore consider the technical efficiency factors of renewables instead.
The current default PEF in the EU is 2.5, meaning each unit of electricity requires an input of 2.5 units of primary energy. In other words, all power generation in the EU, independent of source, is treated as only 40% efficient. However, this value is outdated, having been established in 2006. Given the ongoing energy transition and evolving technologies within the energy industry, the value today should be closer to 2.0, largely because the conversion factor of renewables is set at 1 in this convention.
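The relationship between a PEF and the efficiency it implies is simply the reciprocal, as a quick sketch shows:

```python
# PEF = primary energy in / electricity out, so implied efficiency = 1 / PEF.
def implied_efficiency(pef: float) -> float:
    """Share of primary energy that ends up as electricity."""
    return 1.0 / pef

print(f"{implied_efficiency(2.5):.0%}")   # EU default PEF of 2.5 -> 40%
print(f"{implied_efficiency(2.0):.0%}")   # a PEF of 2.0 -> 50%
```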
The table below compares the conversion efficiencies of commercially available power plants in operation with theoretical values currently in development:
| Source | Operation Factor | Theoretical Factor |
|----------|------------------|---------------------------------|
| Solar PV | 6 – 40% | 47.1% (achieved), limit at 90% |
| Hydro | 80 – 90% | 90% (limit achieved) |
| Wind | 35 – 40% | 59.6% |
| Gas | 20 – 60% | 80% |
| Nuclear | 33 – 37% | 45% |
The energy conversion factors outline huge potential for solar PV and seem to indicate a trade-off between baseload capability and high efficiency. While nuclear power provides constant output with nearly zero fluctuations, its efficiency upside is somewhat limited. Solar and wind, on the other hand, offer plenty of headroom for future technological gains, yet they are not capable of providing baseload electricity.
Older technologies like coal, hydro and gas can provide constant power output, with gas offering a surprisingly high theoretical efficiency factor as an additional bonus.
Because primary energy generation sites are often located in remote areas, far away from the actual consumption sites, energy must be transported. Our current infrastructure, however, does not transmit and distribute power at the highest possible efficiency: some of the energy is lost between production and consumption sites.
Looking at global statistics, only around 70 per cent of total primary energy consumption actually reaches consumers. Of the losses, 22 per cent can be attributed to transmission via high-voltage lines and roughly 1.5 per cent to low-voltage distribution.
Numbers vary by country, continent and year, depending on the energy mix, as different energy sources show different conversion efficiencies. Losses associated with transmission and distribution lines in developed countries such as the United States, Germany, or Singapore were in the low to mid single digits, whereas developing countries saw numbers in excess of 50 per cent.
An estimated half of the carbon dioxide equivalents lost in transmission and distribution can be avoided through the deployment of more advanced technologies, the replacement of inefficient wires, and upgrades to existing infrastructure. Further, digital technologies such as blockchain, smart meters and AI can help establish power-flow routines that enable a better configuration of power and distribution lines.
Worldwide, the lost carbon dioxide equivalents add up to almost a billion metric tons of CO2, more than the entire chemicals sector emits globally per year. In practical terms, this means we should also help emerging and developing countries leapfrog dirty fossil fuel options for generating electricity by providing them with top-notch technologies.
Assuming 100 per cent efficiency is unrealistic: electric currents generate heat, for example, which simply cannot be captured and utilized at every moment. Consequently, the longer the distance from generation site to consumption site, the more is lost; this is simply thermodynamics. Nevertheless, efficiency losses can be reduced to an extent, and with them the environmental impact of the energy sector. To quantify the costs of the CO2 emitted as a direct result of the inefficiencies described above, we use the CO2 prices we derived in the EW-Factor.
Carbon prices, as was discussed in the post that first introduced the EW-Factor, are estimated based on the following:
A report by the Climate Leadership Council (CLC) examines the consequences of taxing carbon at $43 per ton starting in 2021, roughly corresponding to the council's 2017 proposal. An annual increase of 5% above inflation would result in a CO2 tax of $112 by 2035. The German government proposed an even more drastic carbon tax, rising to $180 by 2030. Nevertheless, we have chosen the more conservative approach, as we do not want to overstate the CO2 costs. This approach is also in line with the European Union Emissions Trading System (EU ETS).
The average prices for emission certificates, measured per metric ton, in 2020 and 2021 were $40 and $44, respectively. Multiplying these by the emission equivalents of the energy lost along the entire lifecycle, we get approximately $38 billion and $41.8 billion, respectively. As already mentioned, it is not realistic to avoid all energy losses caused by transformation and distribution, merely about half of them. Even then, the numbers remain in the low-to-mid twenty billions.
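The arithmetic behind these figures can be sketched as follows; the roughly 0.95 billion tons of CO2 equivalents used here is implied by the stated totals and matches the "almost a billion metric tons" mentioned above:

```python
# Rough reconstruction of the carbon-cost estimate.
co2_lost_tons = 0.95e9      # CO2 equivalents lost along the chain, metric tons
price_2020 = 40             # average emission-certificate price 2020, $/t
price_2021 = 44             # average emission-certificate price 2021, $/t

cost_2020 = co2_lost_tons * price_2020        # ~ $38 billion
cost_2021 = co2_lost_tons * price_2021        # ~ $41.8 billion
avoidable = 0.5 * cost_2021                   # only about half the losses are realistically avoidable

print(round(cost_2020 / 1e9, 1))   # 38.0
print(round(cost_2021 / 1e9, 1))   # 41.8
print(round(avoidable / 1e9, 1))   # 20.9
```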
Just think of what could be created with more than $20 billion. Take solar power plants, for example. The cost per watt of solar installation varies with factors such as panel costs, land costs, cost of capital, and so on, but it usually lies somewhere around $1 per watt. Hence, a 1-megawatt solar farm would cost around $1 million.
The cumulative installed capacity of solar photovoltaics globally was 586.4 GW in FY2019, and the global weighted average LCOE for utility-scale solar projects was $60 per MWh. Using the avoided costs from improved energy transformation and distribution efficiency alone, we could add around 20 GW per year, assuming costs of $1 per watt; today, installation costs may even be as low as $0.80 or $0.90 per watt for utility-scale projects. In other words, roughly 3 to 4 per cent of the to-date installed capacity of solar could be added each year if emissions due to grid inefficiencies were cut in half.
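A quick back-of-the-envelope check, assuming roughly $21 billion per year of avoidable losses (the low-twenties figure derived above):

```python
# How much solar capacity the avoided costs could buy per year.
avoided_cost = 21e9          # assumed avoidable losses per year, $
installed_gw = 586.4         # global cumulative solar PV capacity, FY2019

for cost_per_watt in (1.00, 0.90, 0.80):
    added_gw = avoided_cost / cost_per_watt / 1e9
    share = added_gw / installed_gw * 100
    print(f"${cost_per_watt:.2f}/W -> {added_gw:.1f} GW added, {share:.1f}% of installed base")
```

At $1 per watt this lands at about 21 GW, or 3.6% of the installed base; at $0.80 per watt it exceeds 26 GW, above 4%.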
Distributed Energy Resources (DER), such as smart cities and the more localized generation sites typical of renewables, will substantially reduce grid efficiency losses by shortening transmission distances. But inefficiency is not the only thing our grid has to improve on: overall grid stability must also be at the forefront of the energy transition if we want to achieve a higher degree of electrification and an increasing share of renewables.
Since the whole point of generating electricity is to use it, we should not forget the inefficiencies that end-users experience. Because end-users are often households, their energy use contributes to total emissions, and it is essential to reduce these emissions to a minimum to meet ambitious, yet necessary, carbon reduction targets.
In the US, for example, around 20% of emissions can be directly attributed to households. When indirect emissions are included, such as the energy embedded in products like food, electronic gadgets, and furniture, figures are closer to 80%. Estimates for the UK are not very promising either: indirect household emissions are estimated at 74%.
Let's start our look at end-use efficiency with batteries. When charging a battery electric vehicle (BEV), the US Department of Energy says that around 16% of the energy is lost in transferring it from the wall outlet to the battery. On top of that, another 20-30% is lost in the vehicle itself in the process of delivering power to the wheels. Even so, around 17% can be recovered through regenerative braking, bringing the power-to-wheel efficiency to roughly 80%. By contrast, a conventional car engine typically does not exceed 30% power-to-road efficiency. (The complete picture of combustion engines vs. BEVs in terms of total efficiency is a topic for another post.)
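Chaining these figures gives a feel for the range; note that applying the 17% regenerative-braking recovery as a simple multiplicative credit is a simplification:

```python
# Chaining the quoted BEV efficiency figures (simplified model).
wall_to_battery = 1 - 0.16        # ~16% lost while charging
regen_credit = 1.17               # ~17% recovered via regenerative braking

for drivetrain_loss in (0.30, 0.20):   # 20-30% lost delivering power to the wheels
    eff = wall_to_battery * (1 - drivetrain_loss) * regen_credit
    print(f"{drivetrain_loss:.0%} drivetrain loss -> {eff:.0%} power-to-wheel")
```

The favorable end of the range (20% drivetrain loss) lands near 79%, consistent with the roughly 80% cited above.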
Batteries, however, are used not only in cars but in all sorts of devices. Self-discharge, the rate at which a battery loses a share of its stored energy while sitting idle without a load, is a well-known and unavoidable issue. The loss is caused by chemical side reactions that take place inside the battery even when no load is applied; the stored chemical energy is gradually converted into heat rather than being destroyed, in line with the first law of thermodynamics, which states that energy can neither be created nor destroyed.
Next, let's look at electric resistance heating, which some sources claim to be 100% efficient, in the sense that the entire incoming supply of electricity is converted into heat. However, because most electricity is still sourced from coal or gas, we need to account for their roughly 30% efficiency in converting the fuel's energy into electricity. On top of that, power is lost on the way from the generation site to the consumption site. As a result, claims of 100% efficiency need to be treated carefully!
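Putting that caveat into numbers, a rough sketch assuming a fossil-heavy supply and a single-digit grid loss typical of developed countries:

```python
# Source-to-heat efficiency of an electric resistance heater (rough sketch).
heater_eff = 1.00            # resistance heating: all incoming electricity becomes heat
generation_eff = 0.30        # rough fossil-fired conversion efficiency cited above
grid_eff = 0.94              # assumed ~6% transmission/distribution loss (developed grid)

effective = heater_eff * generation_eff * grid_eff
print(f"{effective:.0%}")    # ~28% of the fuel's primary energy ends up as room heat
```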
Let's do a quick calculation based on what we have learned so far, using our current energy mix: 38% of electricity comes from coal at roughly 48% efficiency, 23% from natural gas at 40%, 16% from hydropower at 85%, 10% from nuclear at 35%, and the remainder (13%) from other renewables. Weighting these efficiencies by share of electricity production gives an approximate overall efficiency of 48%. This is a little above the 40% implied by the outdated EU PEF; as explained above, the rising share of renewables in the energy mix will push this implied efficiency further up.
Taking that 48% generation efficiency and factoring in transmission and distribution at 70% efficiency to get the electricity to consumers, the overall figure drops to 33.6%. This reduction results solely from converting primary energy into electricity and transmitting it to households. A further decrease comes from the actual consumption and the inefficiencies of individual home appliances and electronics.
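The weighted average above can be reconstructed as follows; the efficiency assigned to "other renewables" is our assumption (a technical efficiency near that of wind), since the post's mix only states their share:

```python
# Reconstruction of the weighted-efficiency estimate.
mix = {                      # source: (share of production, conversion efficiency)
    "coal":             (0.38, 0.48),
    "gas":              (0.23, 0.40),
    "hydro":            (0.16, 0.85),
    "nuclear":          (0.10, 0.35),
    "other renewables": (0.13, 0.35),   # assumed technical efficiency
}
generation_eff = sum(share * eff for share, eff in mix.values())
delivered_eff = 0.48 * 0.70   # rounded 48% times 70% transmission efficiency

print(f"{generation_eff:.1%}")   # ~49%, close to the rounded 48% above
print(f"{delivered_eff:.1%}")    # 33.6% at the consumer's wall socket
```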
Looking at the average US household, an incredible 51% of energy consumption is attributed to air-conditioning and space heating alone. Water heating, lighting, and refrigeration accounted for 27% of total annual home energy use in FY2015, and the remaining 21% went to consumer electronics, cooking appliances and washers of all kinds. Around 40% of this energy consumption is electricity. Absolute data on household efficiencies is lacking, but looking at the trend, overall household efficiency increased by 24% between 1990 and 2009.
Moving on to in-home applications: according to the International Journal of Engineering & Research, the average electric stove is roughly 49% efficient when calculated from measured input and utilized energies. This leaves clear room for improvement.
Many households today include washing machines and fridges, raising the question of whether their efficiency is high enough. Fridges, for example, have improved rapidly: modern ones use only about 25% of the energy that models from the 1990s did. Washing machines have seen similar improvements; high-efficiency washers (laundry appliances that meet specific criteria for water, electricity, and detergent use) employ load-sensing technology to optimize water and electricity use, making them eco-friendlier and more efficient.
Further improvements from these levels would benefit from agreement on a worldwide energy efficiency scale for such devices. Currently, a range of scales and input factors is used for these ratings, which ultimately causes a lack of transparency in efficiency markets.
In light of the Covid-19 pandemic, the use of computers, mobile phones, TVs and screens in many homes has increased rapidly. We have taken a closer look at the energy efficiencies of these devices and were not too impressed.
LED televisions and screens are said to be among the most efficient, saving between 30% and 70% of electricity consumption compared to plasma TVs. Even so, they are still heavy energy consumers: depending on size, display brightness and, of course, usage, they can draw more electricity than your fridge or freezer.
When it comes to our closest companion, the smartphone, researchers from South Korea have found that iPhones running iOS are more energy-efficient than smartphones running Android. Our intention is not to encourage everyone to buy iPhones from now on, but to raise awareness of the differences.
Charging your smartphone correctly and effectively also contributes to its overall efficiency. Phone chargers themselves only achieve efficiency rates between 63% and 80%, and that is before accounting for overcharging or self-discharge. Studies show that charging your phone to only 80% is healthier for the battery than topping it up to 100%; the same holds true for laptops and tablets.
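To see what that charger-efficiency range means in practice, here is a small sketch assuming a typical smartphone battery of about 15 Wh (our assumption, not a figure from the studies above):

```python
# Wall energy needed to fill an assumed 15 Wh smartphone battery.
battery_wh = 15.0                       # assumed typical battery capacity
for charger_eff in (0.63, 0.80):        # charger efficiency range cited above
    wall_wh = battery_wh / charger_eff
    print(f"{charger_eff:.0%} efficient charger -> {wall_wh:.1f} Wh drawn from the wall")
```

An inefficient charger thus draws roughly 24 Wh from the wall for every 15 Wh stored, versus under 19 Wh for an efficient one.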
Many operating systems already apply the 80% charging approach by studying your charging habits, in order to extend battery life and keep efficiency high. Nonetheless, only with your active help can this be optimized further. It is good not only for your battery, but also for emissions goals and energy efficiency.