Voltage Control of Active Distribution Network Using Reinforcement Learning

dc.contributor.authorMishra, Kapil
dc.date.accessioned2022-04-18T06:04:06Z
dc.date.available2022-04-18T06:04:06Z
dc.date.issued2022-03
dc.descriptionThe decreasing trend in the levelized cost of energy produced from renewable energy resources, together with their widespread potential, has motivated the adoption of distributed generation (DG), mainly solar photovoltaic (PV).en_US
dc.description.abstractThe decreasing trend in the levelized cost of energy produced from renewable energy resources, together with their widespread potential, has motivated the adoption of distributed generation (DG), mainly solar photovoltaic (PV). Variations in DG output and consumer load may cause voltage violations in the distribution feeder. In this research, a model-free control algorithm based on reinforcement learning (RL) is presented for voltage regulation of distribution feeders by controlling the reactive power output of smart inverters (SI) with minimum curtailment of PV-generated active power. The SI serves two purposes: generating active power to supply the load demand and generating reactive power for voltage control. The deep deterministic policy gradient (DDPG) RL algorithm is used to train an agent; through training, the agent learns a policy for controlling the smart inverter output based on a designed reward function, and the trained agent is then used to determine the P and Q setpoints of the SI that keep the voltage within the desired limits. The algorithm is implemented on the IEEE 33-bus radial distribution feeder. A load flow program has been developed to obtain the voltage magnitude at each node using Kirchhoff's laws, where the voltage magnitude and angle are functions of the total power flowing through the node and the branch resistance and reactance. Distributed generation is then added to the IEEE 33-bus system and the load flow analysis is repeated. The analysis shows that without DG and at low DG penetration, the line losses increase and the node voltages decrease as the load increases, and vice versa. However, at high DG penetration, the line losses increase during light-load conditions. The designed DDPG agent successfully keeps the node voltages within limits under varying load. The active power curtailment of the model is also compared with that of the volt-VAR droop model; the results show that the active power curtailment by the proposed model is 2.41 percent less than that of the volt-VAR droop control method.en_US
dc.identifier.citationMASTER OF SCIENCE IN RENEWABLE ENERGY ENGINEERINGen_US
dc.identifier.urihttps://hdl.handle.net/20.500.14540/9899
dc.language.isoenen_US
dc.publisherPulchowk Campusen_US
dc.subjectDistributed Generation (DG)en_US
dc.subjectReinforcement Learning (RL)en_US
dc.titleVoltage Control of Active Distribution Network Using Reinforcement Learningen_US
dc.typeThesisen_US
local.academic.levelMastersen_US
local.affiliatedinstitute.titlePulchowk Campusen_US
local.institute.titleInstitute of Engineeringen_US
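The abstract describes two computational components: a load flow for the radial feeder based on Kirchhoff's laws, and a DDPG agent whose reward balances voltage-limit violations against curtailment of PV active power. The Python sketches below are illustrative only, not the thesis implementation; the node-numbering convention, the voltage limits (0.95-1.05 p.u.), the reward weights, and all function and class names are assumptions made for the example.

import numpy as np

def backward_forward_sweep(parent, r, x, p_load, q_load,
                           v_slack=1.0, tol=1e-6, max_iter=100):
    # Backward/forward sweep load flow for a radial feeder (a common way to
    # apply Kirchhoff's current and voltage laws node by node).
    # parent[i]            : upstream node of node i (parent[0] = -1 for the slack bus);
    #                        nodes are assumed numbered so that parent[i] < i.
    # r[i], x[i]           : per-unit resistance and reactance of the branch feeding node i.
    # p_load[i], q_load[i] : per-unit active and reactive power drawn at node i.
    n = len(parent)
    v = np.full(n, v_slack, dtype=complex)                 # flat voltage start
    s_load = np.asarray(p_load, dtype=float) + 1j * np.asarray(q_load, dtype=float)
    for _ in range(max_iter):
        # Backward sweep: accumulate branch currents from the leaves to the root
        # (Kirchhoff's current law at every node).
        i_branch = np.conj(s_load / v)
        for node in range(n - 1, 0, -1):
            i_branch[parent[node]] += i_branch[node]
        # Forward sweep: apply the voltage drop along every branch from the root
        # (Kirchhoff's voltage law).
        v_new = v.copy()
        for node in range(1, n):
            v_new[node] = v_new[parent[node]] - (r[node] + 1j * x[node]) * i_branch[node]
        if np.max(np.abs(v_new - v)) < tol:
            return np.abs(v_new)
        v = v_new
    return np.abs(v)

A gym-style environment built on this load flow can expose the control problem to a standard DDPG implementation: the action vector stacks the P and Q setpoints of the smart inverters, the observation contains the voltage profile and the net nodal injections, and the reward penalises voltage excursions outside the assumed limits together with curtailment of the available PV active power, mirroring the objective stated in the abstract.

class SmartInverterEnv:
    # Hypothetical environment wrapper; the weights and limits are placeholders.
    def __init__(self, parent, r, x, pv_buses,
                 v_min=0.95, v_max=1.05, w_voltage=10.0, w_curtail=1.0):
        self.parent, self.r, self.x = parent, r, x
        self.pv_buses = np.asarray(pv_buses)
        self.v_min, self.v_max = v_min, v_max
        self.w_voltage, self.w_curtail = w_voltage, w_curtail

    def step(self, action, p_available, p_load, q_load):
        # Split the flat action into P and Q setpoints; an inverter cannot
        # inject more active power than the PV array currently makes available.
        p_inv, q_inv = np.split(np.asarray(action, dtype=float), 2)
        p_inv = np.clip(p_inv, 0.0, p_available)

        # Net nodal injections seen by the load flow at the PV buses.
        p_net = np.asarray(p_load, dtype=float).copy()
        q_net = np.asarray(q_load, dtype=float).copy()
        p_net[self.pv_buses] -= p_inv
        q_net[self.pv_buses] -= q_inv

        v = backward_forward_sweep(self.parent, self.r, self.x, p_net, q_net)

        # Reward: penalise voltage-limit violations and PV active power curtailment.
        violation = np.maximum(v - self.v_max, 0.0) + np.maximum(self.v_min - v, 0.0)
        curtailment = float(np.sum(p_available - p_inv))
        reward = -(self.w_voltage * float(np.sum(violation))
                   + self.w_curtail * curtailment)

        obs = np.concatenate([v, p_net, q_net])
        return obs, reward, False, {}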
Files

Original bundle
Name: thesis report Kapil.pdf
Size: 2.47 MB
Format: Adobe Portable Document Format

License bundle
Name: license.txt
Size: 1.71 KB
Format: Item-specific license agreed upon to submission