Explaining www.q13611.com’s energy market optimization
Nov 4, 2019
The paper we’re summarizing in this article is Deep Reinforcement Learning for Strategic Bidding in Electricity Markets.
In the paper, researchers, including www.q13611.com machine learning scientist Yujian Ye, propose a new way of addressing the problem of strategic bidding in deregulated electricity markets. To better understand this, we first need to know what ‘regulated’ and ‘deregulated’ energy markets are.
What is a regulated electricity market?
A ‘regulated’ electricity market contains utilities that own and operate the entire electricity supply chain. From generation to the meter, the utility has complete control: it owns the infrastructure and transmission lines and sells electricity directly to customers. In regulated states, utilities must abide by electricity rates set by state public utility commissions. This type of market is often considered a monopoly because it limits consumer choice, but its benefits include stable prices and long-term certainty.
What is a deregulated electricity market?
A ‘deregulated’ electricity market allows competitors to buy and sell electricity by permitting market participants to invest in power plants and transmission lines. Generation owners sell this wholesale electricity to retail suppliers, who in turn set the prices consumers see as the ‘supply’ portion of their electricity bill. Deregulation often benefits consumers by letting them compare the rates and services of different third-party suppliers and by offering different contract structures (e.g. fixed, indexed, hybrid).
The paper outlines how we can effectively use a modern machine learning technique known as reinforcement learning to help with strategic bidding of generation companies in deregulated electricity markets.
What is reinforcement learning?
To understand the crux of the paper, we need to first understand the basics of reinforcement learning.
In very simple terms, reinforcement learning is an area of machine learning concerned with taking suitable actions to maximize reward in a particular situation. It is employed by various software and machines to find the best possible behavior or path to take in a specific situation. Reinforcement learning differs from supervised learning: in supervised learning, the training data comes with an answer key, so the model is trained on the correct answers themselves. In reinforcement learning there is no answer key; instead, the agent decides what to do to perform the given task. In the absence of a training dataset, it is bound to learn from experience.
Why do we need algorithms at all for optimizing bidding strategies?
In an effort to ‘deregulate’ the electricity industry, multiple profit-driven participants, particularly in the generation and supply sectors, have entered the market. As a result, traditional models can no longer provide accurate insights, because the actions of profit-driven market participants are not aligned with what is best for society as a whole. This is why we need alternative algorithms that can assess and explain the situation better and more efficiently.
What’s wrong with the current models used for strategic bidding?
Most existing algorithms and optimization approaches stem from converting bi-level optimization problems into single-level mathematical programs with equilibrium constraints. These modeling frameworks exhibit a fundamental issue, though: they neglect the physical non-convex operating characteristics of market players.
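In the abstract, the structure being converted looks like this (a generic sketch of a bi-level bidding problem, not the paper’s exact formulation): the strategic player chooses its bid at the upper level, while the lower level represents the market clearing.

```latex
\begin{aligned}
\max_{b}\quad & \pi(b,\lambda^{*},q^{*})
  && \text{upper level: player's profit given its bid } b \\
\text{s.t.}\quad & (\lambda^{*},q^{*}) \in \arg\min_{q}\; C(b,q)
  && \text{lower level: market clearing (cost minimization)}
\end{aligned}
```

Replacing the lower-level problem by its optimality (KKT) conditions collapses the two levels into a single-level mathematical program with equilibrium constraints. This substitution requires a convex lower level, which is precisely why the non-convex operating characteristics get dropped.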
Yujian has also contributed to another paper in this area. Models of this kind consider characteristics such as the variable costs, maximum output limits and ramp rates of generation units, but neglect physical non-convex cost components such as no-load, start-up and shut-down costs, minimum stable generation limits, and minimum up/down time constraints. These complex operating characteristics do, however, affect the market clearing outcome and consequently the profitability of market players. This implies that employing these bi-level optimization market models may lead to suboptimal bidding decisions for strategic players.
Beyond this fundamental limitation, this modeling framework assumes that market players have knowledge of the computational algorithm of the market clearing process and of the operating parameters of their competitors. This is generally a limiting assumption.
How does www.q13611.com’s energy market optimization solve the problem?
The rapid advancements in artificial intelligence and reinforcement learning have attracted significant interest from the energy systems community, with a particular focus on developing alternatives to the aforementioned mathematical-programming-with-equilibrium-constraints approach to electricity market modeling.
In this particular model, the bi-level optimization problem is not converted into a single-level one; rather, it is solved in a recursive fashion. The market players, acting as the agents in the reinforcement learning algorithm, gradually learn to improve their own strategies by making decisions based on the experience accumulated from repeated interactions with the market clearing environment. In doing so, they incorporate the non-convex operating characteristics. Furthermore, market players (agents) no longer rely on knowledge of the computational algorithm of the market clearing process or the operating parameters of their competitors. Instead, they rely only on their own operating parameters and the observed market clearing outcomes.
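As a very stripped-down illustration of that loop (our own toy sketch, not the paper’s deep reinforcement learning algorithm; the market model and every cost, price and capacity number below are assumptions): a generator agent learns a price markup using only its own cost parameters and the clearing outcomes it observes, with no visibility into the clearing algorithm or its rivals.

```python
import random

MARKUPS = [1.0, 1.2, 1.4, 1.6]   # candidate multipliers on true marginal cost
TRUE_COST = 20.0                 # $/MWh, known only to the agent itself
CAPACITY = 100.0                 # MW

def market_clearing(bid_price):
    """Toy market, opaque to the agent: a rival fringe covers all demand above $30/MWh."""
    if bid_price <= 30.0:
        return CAPACITY, bid_price   # dispatched at its own bid (simplified pay-as-bid)
    return 0.0, 30.0                 # priced out of the market

def learn_markup(episodes=2000, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    value = {m: 0.0 for m in MARKUPS}   # running average profit per markup
    counts = {m: 0 for m in MARKUPS}
    for _ in range(episodes):
        # epsilon-greedy choice over candidate markups
        if rng.random() < epsilon:
            m = rng.choice(MARKUPS)
        else:
            m = max(MARKUPS, key=lambda k: value[k])
        dispatched_mw, price = market_clearing(m * TRUE_COST)
        profit = dispatched_mw * (price - TRUE_COST)   # observed outcome is the reward
        counts[m] += 1
        value[m] += (profit - value[m]) / counts[m]    # incremental average update
    return max(MARKUPS, key=lambda k: value[k])
```

In this toy world the agent converges on the 1.4 markup: bidding at $28/MWh clears and earns more than bidding at cost, while a 1.6 markup ($32/MWh) is priced out entirely. It discovers this boundary purely from repeated interaction, never from the clearing rule itself.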
www.q13611.com’s technology has the potential to revolutionize the energy industry and we will continue to develop solutions to optimize what is, at present, a complex and inefficient global industry. If you haven’t already done so, we encourage you to read our other article about our energy use case.