One-node operation: step-by-step learning¶

This file is a step-by-step introduction to one-node operation optimisation. If you're lost, you can go back to

  • the one node model README, which presents the planning tools and consumption models
  • the two node models README for step-by-step learning with two nodes (France and Germany)
  • the more complete 7 node model README for a European model suited to prospective analysis

Table of Contents¶

  • 1. Introduction
  • 2. First basic problem
    • 2.1. Math and first step with pyomo for solving the problem
    • 2.2. Analysing results: Lagrange multipliers
  • 3. Extensions of this operation problem
    • 3.1. Linear temporal coupling with ramp constraints
    • 3.2. Linear spatial coupling with spatial constraints - Problem Op3 Multi-Area -
  • 4. Storage operation
    • 4.1. Optimisation of a storage market participation
    • 4.2. Simultaneous optimisation of storage and electric system
  • 5. Complete French case

1. Introduction ¶

This document gives you a chance to

  • simulate the hourly operation of an electric system over a year
  • understand the mathematical formulation of the optimisation problem
  • learn to analyse the results of the optimisation, in particular the Lagrange multipliers
  • get in touch with pyomo (a python package to write optimisation problems)

It enters the subject by progressively increasing the number of variables and constraints in the optimisation problem, hence moving toward more realism through the document, introducing:

  • ramp constraints, which imply a simple temporal coupling
  • spatially indexed variables and congestion constraints, which imply a simple spatial coupling
  • storage constraints, which imply a temporal coupling

It relies on different test cases that allow you to

  • consider different production means (nuclear, thermal, solar, onshore wind power, offshore wind power, hydro, curtailment of consumption, storage)
  • consider different meteorological years for France
  • consider different countries in the multi-zone case (France, Germany, GB, Spain)

If, after reading this file, you want to build your own pyomo model, you can create it in f_operationModels.py by adding a function that mimics the existing ones (a skeleton is sketched below). You can create your own test case by adding a case_XXX.py and/or case_XXX.ipynb file in this folder.
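For reference, here is what such a function could look like. It only mimics the call signature of GetElectricSystemModel_GestionSingleNode used later in this notebook; the function name is hypothetical and the body is a placeholder, not the actual implementation found in f_operationModels.py.

from pyomo.environ import ConcreteModel

def GetElectricSystemModel_MyNewVariant(Parameters):
    # hypothetical name; mimic the signature of the existing model-building functions
    areaConsumption = Parameters["areaConsumption"]
    availabilityFactor = Parameters["availabilityFactor"]
    TechParameters = Parameters["TechParameters"]
    model = ConcreteModel()
    # ... declare sets, parameters, variables, the objective and constraints here ...
    return model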

Let us start by loading libraries

In [1]:
#region importation of modules
import os
import numpy as np
import pandas as pd
import csv

import datetime
import copy
import plotly.graph_objects as go
import matplotlib.pyplot as plt
from sklearn import linear_model
import sys


#endregion

#region Solver and data location definition
if os.path.basename(os.getcwd())=='Operation_optimisation':
    sys.path.append('../../../')

from functions.f_graphicalTools import *
from functions.f_consumptionModels import *
## locally defined models
from Models.Basic_France_models.Operation_optimisation.f_operationModels import *

InputFolder='Data/input/'
GraphicalResultsFolder="GraphicalResults/"

if sys.platform != 'win32':
    myhost = os.uname()[1]
else : myhost = ""
if (myhost=="jupyter-sop"):
    ## for https://jupyter-sop.mines-paristech.fr/ users, you need to
    #  (1) run the following to launch the license server
    if (os.system("/opt/mosek/9.2/tools/platform/linux64x86/bin/lmgrd -c /opt/mosek/9.2/tools/platform/linux64x86/bin/mosek.lic -l lmgrd.log")==0):
        os.system("/opt/mosek/9.2/tools/platform/linux64x86/bin/lmutil lmstat -c 27007@127.0.0.1 -a")
    #  (2) definition of license
    os.environ["MOSEKLM_LICENSE_FILE"] = '@jupyter-sop'

BaseSolverPath='/Users/robin.girard/Documents/Code/Packages/solvers/ampl_macosx64' ### change this to the folder with knitro ampl ...
## in order to obtain more solvers see https://ampl.com/products/solvers/open-source/
## for education this site also provides several professional solvers, which are more efficient than e.g. cbc
sys.path.append(BaseSolverPath)
solvers= ['gurobi','knitro','cbc'] # try 'glpk', 'cplex'
solverpath= {}
for solver in solvers : solverpath[solver]=BaseSolverPath+'/'+solver
solver= 'mosek' ## no need for solverpath with mosek.
#endregion

InputConsumptionFolder='../Consumption/Data/'
InputProductionFolder='../Production/Data/'
InputOperationFolder='Data/'

2. First basic problem ¶

In this section we propose a simple version of the optimisation problem with a mathematical description. This allows us to discuss the use of pyomo (a python package to write optimisation problems).

2.1. Math and first step with pyomo for solving the problem¶

Mathematical formulation as a linear programming problem

\begin{align} &\text{Cost function }& &\min_{x} \sum_t \sum_i \pi_i x_{it}\;\;\; & & \pi_i \text{ marginal cost}\\ &\text{Power limit } & &\text{ s.t.} \;\; 0 \leq x_{it}\leq a_{it} \bar{x_i} & &\bar{x_i} \text{ installed power, } a_{it} \text{ availability}\\ &\text{Meet demand } & & \sum_i x_{it} \geq C_t && C_t \text{ Consumption}\\ &\text{Stock limit } & &\sum_t x_{it}\leq E_i && E_i=\bar{x_i}*N_i \text{ Energy capacity limit}\\ \end{align}

This is linear programming and could be transformed into something like

\begin{align} & \min_{y}\; c^Ty \\ &\text{s.t.} \;\; Ay\leq b \end{align}

with a well-chosen parameter matrix $A$, parameter vectors $b$ and $c$, and a variable vector $y$. As the complexity of the problem increases, writing these matrices by hand becomes very painful. The toy instance below illustrates this on a single hour with two technologies.
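To make this concrete, here is a toy instance of the matrix form for a single hour $t$ and two technologies (for illustration only; the stock limit is omitted): with $y=(x_{1t}, x_{2t})^T$ and $c=(\pi_1, \pi_2)^T$,

\begin{align} A= \begin{pmatrix} 1 & 0 \\ 0 & 1 \\ -1 & 0 \\ 0 & -1 \\ -1 & -1 \end{pmatrix}, \;\;\; b= \begin{pmatrix} a_{1t}\bar{x}_1 \\ a_{2t}\bar{x}_2 \\ 0 \\ 0 \\ -C_t \end{pmatrix}, \end{align}

where the first two rows encode the power limits, the next two the non-negativity constraints (written as $-x_{it}\leq 0$), and the last row the demand constraint (written as $-x_{1t}-x_{2t}\leq -C_t$).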

With the Python package Pyomo, it is almost sufficient to write the mathematical equations. Pyomo then takes care of building the matrix form, and you only have to think about the problem formulation. Pyomo is similar to other tools such as GAMS, AMPL, ...

Principles of a pyomo model. To build a model, Pyomo needs three different kinds of data: sets, parameters and variables.

  • Sets are dimensions, here the time, the technology names, and the product of these two: Date, TECHNOLOGIES and Date_TECHNOLOGIES (product set).

  • Parameters are tables indexed by sets whose values are specified by the user. Here is the list of the parameters we use in the simplest cases: energy cost, EnergyNbhourCap, capacity, availability factor, area consumption.

  • Variables are tables indexed by sets whose values are found by the solver: here, the energy produced by each production mean.

Now, if you wish to learn more about pyomo and to see how the optimisation problem is written in pyomo language, take a look at the function GetElectricSystemModel_GestionSingleNode (by control-clicking on it in the code that follows a bit later; otherwise it is in f_operationModels.py). We will use this function in the following example; a minimal toy version of such a model is sketched below.
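The following minimal, self-contained sketch shows the three ingredients (sets, parameters, variables) on a toy two-technology, three-hour version of the problem above. The data and some names (e.g. energyCost) are illustrative assumptions; the real model is the one built in f_operationModels.py.

from pyomo.environ import (ConcreteModel, Set, Param, Var,
                           Objective, Constraint, NonNegativeReals, minimize)

toy_model = ConcreteModel()

# Sets: the dimensions of the problem
toy_model.TECHNOLOGIES = Set(initialize=['OldNuke', 'CCG'])
toy_model.Date = Set(initialize=[1, 2, 3])

# Parameters: user-supplied tables indexed by the sets (toy values)
toy_model.energyCost = Param(toy_model.TECHNOLOGIES, initialize={'OldNuke': 30, 'CCG': 60})
toy_model.capacity = Param(toy_model.TECHNOLOGIES, initialize={'OldNuke': 60000, 'CCG': 20000})
toy_model.availabilityFactor = Param(toy_model.Date, toy_model.TECHNOLOGIES, default=1.0)
toy_model.areaConsumption = Param(toy_model.Date, initialize={1: 55000, 2: 65000, 3: 70000})

# Variables: tables indexed by the sets, whose values are chosen by the solver
toy_model.energy = Var(toy_model.Date, toy_model.TECHNOLOGIES, domain=NonNegativeReals)

# Cost function
toy_model.OBJ = Objective(expr=sum(toy_model.energyCost[tech] * toy_model.energy[t, tech]
                                   for t in toy_model.Date for tech in toy_model.TECHNOLOGIES),
                          sense=minimize)

# Power limit: production below available capacity
def capacity_rule(m, t, tech):
    return m.energy[t, tech] <= m.availabilityFactor[t, tech] * m.capacity[tech]
toy_model.CapacityCtr = Constraint(toy_model.Date, toy_model.TECHNOLOGIES, rule=capacity_rule)

# Meet demand at every hour
def energy_rule(m, t):
    return sum(m.energy[t, tech] for tech in m.TECHNOLOGIES) >= m.areaConsumption[t]
toy_model.energyCtr = Constraint(toy_model.Date, rule=energy_rule)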

First simulation -- description of case_step_by_step_learning

If you want to give it a try, use case_step_by_step_learning.py or case_step_by_step_learning.ipynb. There, we start with a simple test case with only two production means (nuclear and thermal), but you can run it with more technologies and change the installed power. If you want to change other assumptions, you will need to edit the csv files that are loaded below.

In [2]:
#region I - Simple single area : loading parameters
year=2013;
#### reading areaConsumption availabilityFactor and TechParameters CSV files
areaConsumption = pd.read_csv(InputConsumptionFolder+'areaConsumption'+str(year)+'_FR.csv',sep=',',decimal='.',skiprows=0,parse_dates=['Date']).set_index(["Date"])
availabilityFactor = pd.read_csv(InputProductionFolder+'availabilityFactor'+str(year)+'_FR.csv',sep=',',decimal='.',skiprows=0,parse_dates=['Date']).set_index(["Date","TECHNOLOGIES"])
TechParameters = pd.read_csv(InputOperationFolder+'Gestion-Simple_TECHNOLOGIES.csv',sep=',',decimal='.',skiprows=0).set_index(["TECHNOLOGIES"])

#### Selection of subset
Selected_TECHNOLOGIES=['OldNuke','CCG'] #you can add technologies here
availabilityFactor=availabilityFactor.loc[(slice(None),Selected_TECHNOLOGIES),:]
TechParameters=TechParameters.loc[Selected_TECHNOLOGIES,:]
TechParameters.loc["CCG",'capacity']=100000 ## margin to make everything work
#TechParameters.loc["WindOnShore",'capacity']=117000
#TechParameters.loc["Solar",'capacity']=67000
#endregion

First simulation -- running the optimisation

Now we run the optimisation script and load the optimised variables.

At this stage, or later, you might want to see the pyomo model. If so, dig into the function GetElectricSystemModel_GestionSingleNode (by control-clicking on it; otherwise it is in f_operationModels.py).

In [3]:
#region I - Simple single area  : Solving and loading results
model = GetElectricSystemModel_GestionSingleNode(Parameters = {"areaConsumption" : areaConsumption,
                                                               "availabilityFactor" : availabilityFactor,
                                                               "TechParameters" : TechParameters})
if solver in solverpath :  opt = SolverFactory(solver,executable=solverpath[solver])
else : opt = SolverFactory(solver)
results=opt.solve(model)
## result analysis
Variables=getVariables_panda_indexed(model)


extractCosts(Variables)
#extractEnergyCapacity(Variables)
### hourly production (in kWh) of each production mean
production_df=Variables['energy'].pivot(index="Date",columns='TECHNOLOGIES', values='energy')
### Check that total production equals consumption at every hour
Delta=(production_df.sum(axis=1) - areaConsumption.areaConsumption);
#endregion

We have analysed the result of the optimisation, first with the dictionary Variables, which contains all the optimised variables. The most important variable here is 'energy'. The other one, 'energyCosts', can be computed from 'energy' and is therefore somewhat redundant, as the cross-check sketched below illustrates.
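A quick cross-check (a sketch; it assumes TechParameters has an energyCost column holding the marginal costs used in the objective):

### recompute the yearly cost of each technology from 'energy' and compare with 'energyCosts'
recomputed_costs = (production_df * TechParameters['energyCost']).sum(axis=0)  # per technology, over the year
print(recomputed_costs)
print(Variables['energyCosts'])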

2.2. Analysing results: Lagrange multipliers ¶

Lagrange multipliers might be more difficult to understand for those who have not yet taken a good optimisation course (the one in the second semester of the first year at MINES ParisTech is perfect). In case you want to dig into this, you can have a look at Boyd's course, e.g. starting with lecture 8 or before. The most important message here is that the Lagrange multipliers associated with the demand constraint (here called 'energyCtr') are meant to mimic market prices: the Lagrange multiplier associated with this constraint at time t is the marginal cost one would pay to increase $C_t$ by a small amount, so these multipliers can be used to simulate market prices. A tiny standalone illustration of where they come from in pyomo is sketched below.
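This is only a sketch: pyomo reports duals when a Suffix named 'dual' is declared before solving (in this repository that is handled for us, and getConstraintsDual_panda collects the values into DataFrames). The toy one-variable problem below is purely illustrative.

from pyomo.environ import ConcreteModel, Var, Objective, Constraint, Suffix, NonNegativeReals

toy = ConcreteModel()
toy.x = Var(domain=NonNegativeReals)
toy.OBJ = Objective(expr=10 * toy.x)        # one technology with marginal cost 10
toy.demand = Constraint(expr=toy.x >= 5)    # a demand of 5 must be served
toy.dual = Suffix(direction=Suffix.IMPORT)  # needed for the solver to report duals

SolverFactory(solver).solve(toy)
print(toy.dual[toy.demand])  # ~10 (up to the solver's sign convention): cost of serving one more unit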

In [4]:
#region I - Simple single area  : visualisation and lagrange multipliers
### plotting the results
fig=MyStackedPlotly(y_df=production_df,Conso = areaConsumption)
fig=fig.update_layout(title_text="Electricity production (in kWh)", xaxis_title="hours of the year")
#plotly.offline.plot(fig, filename='file.html') ## offline
fig.show()
print(production_df)
#### lagrange multipliers
Constraints= getConstraintsDual_panda(model)

# Analyse energyCtr
energyCtrDual=Constraints['energyCtr']
energyCtrDual
round(energyCtrDual.energyCtr,2).unique()

# Analyse CapacityCtr
CapacityCtrDual=Constraints['CapacityCtr'].pivot(index="Date",columns='TECHNOLOGIES', values='CapacityCtr');
round(CapacityCtrDual,2)
round(CapacityCtrDual.OldNuke,2).unique() ## increasing the installed nuclear capacity by a small amount decreases the cost by this dual value when the nuclear capacity constraint is binding
round(CapacityCtrDual.CCG,2).unique() ## increasing the capacity of CCG has no effect on prices
#endregion
TECHNOLOGIES              OldNuke          CCG
Date                                          
2013-01-01 00:00:00  57282.151899   537.848101
2013-01-01 01:00:00  57183.000000    -0.000000
2013-01-01 02:00:00  54320.000000    -0.000000
2013-01-01 03:00:00  50742.000000    -0.000000
2013-01-01 04:00:00  48982.000000    -0.000000
...                           ...          ...
2013-12-31 19:00:00  54959.649921  7052.350079
2013-12-31 20:00:00  54882.270570  5093.729430
2013-12-31 21:00:00  54804.891218  9475.108782
2013-12-31 22:00:00  54727.511867  9974.488133
2013-12-31 23:00:00  54650.132516  6773.867484

[8760 rows x 2 columns]
Out[4]:
array([-0.])

The dictionary 'Constraints' contains the Lagrange multipliers associated with each constraint (zero means the constraint is inactive). We have three different constraints: the energy constraint, the capacity constraint and the storage constraint (inactive here). Try to understand the meaning of the Lagrange multipliers, here and in the next sections; a first example is sketched below.
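For instance, sorting the duals of the demand constraint gives a price duration curve, i.e. the simulated market price over the year in decreasing order (a sketch using the energyCtrDual DataFrame computed above):

price_duration = energyCtrDual['energyCtr'].sort_values(ascending=False).reset_index(drop=True)
price_duration.plot()  # simulated price, sorted in decreasing order over the year
plt.xlabel("hours of the year (sorted)")
plt.ylabel("dual of energyCtr (simulated price)")
plt.show()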

3. Extensions of this operation problem ¶

3.1. Linear temporal coupling with ramp constraints ¶

In this section, we increase the complexity of the problem given in Section 2 and add a dependency on the area z (country), a congestion constraint and ramp constraints.

\begin{align} &\text{Cost function }& &\min_{x} \sum_z \sum_t \sum_i \pi_{iz} x_{itz}\;\;\; & & \pi_{iz} \text{ marginal cost}\\ &\text{Power limit } & &\text{ s.t.} \;\; 0 \leq x_{itz}\leq a_{itz} \bar{x_{iz}} & &\bar{x_{iz}} \text{ installed power, } a_{itz} \text{ availability}\\ &\text{Meet demand } & & \sum_i x_{itz} \geq C_{tz} && C_{tz} \text{ Consumption}\\ &\text{Stock limit } & &\sum_t x_{it}\leq E_i && E_i=\bar{x_i}*N_i \text{ Energy capacity limit}\\ &\text{ramp limit } & &rc^-_i *x_{it}\leq x_{it}-x_{i(t+1)}\leq rc^+_i *x_{it} && rc^+_i rc^-_i\text{ ramp limit}\\ \end{align}
In [5]:
#region II - Ramp Ctrs Single area : loading parameters (case with ramp constraints)
year=2013
Selected_TECHNOLOGIES=['OldNuke', 'CCG',"curtailment"] # you can add 'Solar' afterwards

#### reading areaConsumption availabilityFactor and TechParameters CSV files
areaConsumption = pd.read_csv(InputConsumptionFolder+'areaConsumption'+str(year)+'_FR.csv',sep=',',decimal='.',skiprows=0,parse_dates=['Date']).set_index(["Date"])
availabilityFactor = pd.read_csv(InputProductionFolder+'availabilityFactor'+str(year)+'_FR.csv',sep=',',decimal='.',skiprows=0,parse_dates=['Date']).set_index(["Date","TECHNOLOGIES"])
TechParameters = pd.read_csv(InputOperationFolder+'Gestion-RAMP1_TECHNOLOGIES.csv',sep=',',decimal='.',skiprows=0).set_index(["TECHNOLOGIES"])

#### Selection of subset
availabilityFactor=availabilityFactor.loc[(slice(None),Selected_TECHNOLOGIES),:]
TechParameters=TechParameters.loc[Selected_TECHNOLOGIES,:]

TechParameters.loc["CCG",'capacity']=100000 ## margin to make everything work
TechParameters.loc["OldNuke",'RampConstraintMoins']=0.01 ## a bit strong, to highlight the effect
TechParameters.loc["OldNuke",'RampConstraintPlus']=0.02 ## a bit strong, to highlight the effect
#endregion

In [6]:
#region II - Ramp Ctrs Single area : solving and loading results
model = GetElectricSystemModel_GestionSingleNode(Parameters = {"areaConsumption" : areaConsumption,
                                                               "availabilityFactor" : availabilityFactor,
                                                               "TechParameters" : TechParameters})
opt = SolverFactory(solver)
results=opt.solve(model)
Variables=getVariables_panda_indexed(model)


### hourly production (in kWh) of each production mean
production_df=Variables['energy'].pivot(index="Date",columns='TECHNOLOGIES', values='energy')
### Check sum Prod = Consumption
Delta=(production_df.sum(axis=1) - areaConsumption.areaConsumption);
abs(Delta).max()
print(production_df.loc[:,'OldNuke'].diff(1).max()/TechParameters.loc["OldNuke","capacity"])
print(production_df.loc[:,'OldNuke'].diff(1).min()/TechParameters.loc["OldNuke","capacity"])



print(production_df.sum(axis=0)/10**6) ### produced energy in TWh
print(Variables['energyCosts']) ### yearly cost of each production mean
#endregion
0.018828533755274306
-0.05131615430982562
TECHNOLOGIES
CCG             83.462521
OldNuke        408.497508
curtailment      0.054591
dtype: float64
  TECHNOLOGIES   energyCosts
0  curtailment  1.637743e+08
1      OldNuke  2.859483e+09
2          CCG  5.742221e+09
In [7]:
#region II - Ramp Ctrs Single area : visualisation and lagrange multipliers


### plotting the results
Date_d=pd.date_range(start=str(year)+"-01-01 00:00:00",end=str(year)+"-12-31 23:00:00",   freq="1H")
production_df.index=Date_d; areaConsumption.index=Date_d;
fig=MyStackedPlotly(y_df=production_df,Conso = areaConsumption)
fig=fig.update_layout(title_text="Electricity production (in kWh)", xaxis_title="hours of the year")
#plotly.offline.plot(fig, filename='file.html') ## offline
fig.show()

#### lagrange multipliers
Constraints= getConstraintsDual_panda(model)

# Analyse energyCtr
energyCtrDual=Constraints['energyCtr']
energyCtrDual
round(energyCtrDual.energyCtr,2).unique()

# Analyse CapacityCtr
CapacityCtrDual=Constraints['CapacityCtr'].pivot(index="Date",columns='TECHNOLOGIES', values='CapacityCtr');
round(CapacityCtrDual,2)
round(CapacityCtrDual.OldNuke,2).unique() ## increasing the installed nuclear capacity by a small amount decreases the cost by this dual value when the nuclear capacity constraint is binding
round(CapacityCtrDual.CCG,2).unique() ## increasing the capacity of CCG has no effect on prices
#endregion
Out[7]:
array([-0.])

Again, you can have a look at the Lagrange multipliers. Try adding renewable production.

Try modifying the ramp constraints to generate negative Lagrange multipliers.

4. Storage operation ¶

4.1. Optimisation of a storage market participation ¶

Just have a look at optim-Storage.ipynb. A minimal sketch of the kind of problem it treats (storage arbitrage against a given price signal) is given below.
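This toy sketch optimises a storage participating in the market, charging when prices are low and discharging when they are high. All data below (prices, power and energy limits, efficiency) are illustrative assumptions, not the values used in optim-Storage.ipynb.

from pyomo.environ import ConcreteModel, RangeSet, Var, Objective, Constraint, maximize

prices = [20, 15, 10, 30, 60, 80, 50, 25]   # toy hourly prices in €/MWh
Pmax, Cmax, eta = 1.0, 4.0, 0.9             # power limit (MW), energy limit (MWh), charging efficiency

m = ConcreteModel()
m.T = RangeSet(1, len(prices))
m.charge = Var(m.T, bounds=(0, Pmax))
m.discharge = Var(m.T, bounds=(0, Pmax))
m.level = Var(m.T, bounds=(0, Cmax))

# revenue from discharging minus cost of charging
m.OBJ = Objective(expr=sum(prices[t-1] * (m.discharge[t] - m.charge[t]) for t in m.T), sense=maximize)

# storage level recursion, assuming the storage starts empty
def level_rule(m, t):
    previous = 0 if t == 1 else m.level[t-1]
    return m.level[t] == previous + eta * m.charge[t] - m.discharge[t]
m.LevelCtr = Constraint(m.T, rule=level_rule)

SolverFactory(solver).solve(m)   # reuse the solver defined at the top of this notebook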

4.2. Simultaneous optimisation of storage and electric system ¶

The optimisation problem is the same as before. In this section, we only add the constraints regarding the storage:

\begin{align} &\text{Cost function }& &\min_{x} \sum_t \sum_i \pi_i x_{it}\;\;\; & & \pi_i \text{ marginal cost}\\ &\text{Power limit } & &\text{ s.t.} \;\; 0 \leq x_{it}\leq a_{it} \bar{x_i} & &\bar{x_i} \text{ installed power, } a_{it} \text{ availability}\\ &\text{Meet demand } & & \sum_i x_{it}+\sum_j (z_{out_{jt}}-z_{in_{jt}}) \geq C_t && C_t \text{ Consumption}, z_{jt} \text{ storage}\\ &\text{Stock limit } & &\sum_t x_{it}\leq E_i && E_i=\bar{x_i}*N_i \text{ Energy capacity limit}\\ \end{align}

Adding several storages to this operation problem is done here : \begin{align} &\text{storage power constraint}& &0\leq z_{in/out_{jt}}\leq p_{\max} \\ &\text{storage energy constraint}& &0\leq \sum_{k=1}^t \left( z_{in_{jk}}.\eta_{in_j}-\frac{z_{out_{jk}}}{\eta_{out_j}} \right)\leq c_{\max} && \eta_j \text{ Storage efficiency (in/out)} \\ \end{align}

By introducing a new variable representing the storage level at any time t, the last equation becomes: \begin{align} \text{storage energy constraint}& &0\leq s_{jt} \leq c_{\max} \\ \text{where}& & s_{jt} = s_{j,t-1}.(1-\delta_j) + z_{in_{jt}}.\eta_{in_j}-\frac{z_{out_{jt}}}{\eta_{out_j}} && \delta_j \text{ Storage dissipation}\\ \end{align}

This model is implemented in the function GetElectricSystemModel_GestionSingleNode_withStorage (used below) in the file f_operationModels.py. The idea of this model is quite general, and a version also exists for the multi-node case and for planning problems (in the file Planning_optimisation/f_PlanningModels.py). A sketch of the storage-level constraint is given just below.
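Here is how the storage-level recursion above could be written as a pyomo constraint. This is a sketch for illustration only: the limits Pmax and Cmax, the efficiencies and the dissipation are made-up values, stockLevel is a hypothetical variable name, and the actual formulation is the one in f_operationModels.py.

from pyomo.environ import ConcreteModel, RangeSet, Var, Constraint

sm = ConcreteModel()
sm.Date = RangeSet(1, 24)                      # 24 toy time steps
Pmax, Cmax = 5000, 20000                       # storage power (MW) and energy (MWh) limits
eta_in, eta_out, delta = 0.9, 0.9, 0.0         # efficiencies and dissipation

sm.storageIn = Var(sm.Date, bounds=(0, Pmax))  # z_in
sm.storageOut = Var(sm.Date, bounds=(0, Pmax)) # z_out
sm.stockLevel = Var(sm.Date, bounds=(0, Cmax)) # 0 <= s_t <= c_max

# s_t = s_{t-1}*(1-delta) + z_in*eta_in - z_out/eta_out, assuming the storage starts empty
def stock_level_rule(sm, t):
    previous = 0 if t == 1 else sm.stockLevel[t-1] * (1 - delta)
    return sm.stockLevel[t] == previous + sm.storageIn[t] * eta_in - sm.storageOut[t] / eta_out
sm.StockLevelCtr = Constraint(sm.Date, rule=stock_level_rule)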

In [8]:
#region III Ramp+Storage single area : loading parameters
year=2013
Selected_TECHNOLOGIES=['OldNuke','WindOnShore', 'CCG',"curtailment"]

#### reading CSV files
areaConsumption = pd.read_csv(InputConsumptionFolder+'areaConsumption'+str(year)+'_FR.csv',sep=',',decimal='.',skiprows=0,parse_dates=['Date']).set_index(["Date"])
availabilityFactor = pd.read_csv(InputProductionFolder+'availabilityFactor'+str(year)+'_FR.csv',sep=',',decimal='.',skiprows=0,parse_dates=['Date']).set_index(["Date","TECHNOLOGIES"])
TechParameters = pd.read_csv(InputOperationFolder+'Gestion-RAMP1_TECHNOLOGIES.csv',sep=',',decimal='.',skiprows=0).set_index(["TECHNOLOGIES"])
StorageParameters = pd.read_csv(InputOperationFolder+'Gestion-RAMP1_STOCK_TECHNO.csv',sep=',',decimal='.',skiprows=0).set_index(["STOCK_TECHNO"])

#### Selection of subset
availabilityFactor=availabilityFactor.loc[(slice(None),Selected_TECHNOLOGIES),:]
TechParameters=TechParameters.loc[Selected_TECHNOLOGIES,:]
TechParameters.loc["CCG",'capacity']=100000 ## margin to make everything work
TechParameters.loc["OldNuke",'RampConstraintMoins']=0.02 ## a bit strong, to highlight the effect
TechParameters.loc["OldNuke",'RampConstraintPlus']=0.02 ## a bit strong, to highlight the effect
#endregion

In [9]:
#region III Ramp+Storage single area : solving and loading results
model= GetElectricSystemModel_GestionSingleNode_withStorage(Parameters = {"areaConsumption" : areaConsumption,
                                                               "availabilityFactor" : availabilityFactor,
                                                               "TechParameters" : TechParameters,
                                                               "StorageParameters" : StorageParameters})

if solver in solverpath :  opt = SolverFactory(solver,executable=solverpath[solver])
else : opt = SolverFactory(solver)
results=opt.solve(model)
Variables = getVariables_panda_indexed(model)
Constraints = getConstraintsDual_panda(model)

production_df=Variables['energy'].pivot(index="Date",columns='TECHNOLOGIES', values='energy')
production_df.loc[:,'Storage'] = Variables['storageOut'].pivot(index='Date',columns='STOCK_TECHNO',values='storageOut').sum(axis=1)-Variables['storageIn'].pivot(index='Date',columns='STOCK_TECHNO',values='storageIn').sum(axis=1) ### put storage in the production time series
production_df.sum(axis=0)/10**6 ### produced energy in TWh
production_df[production_df>0].sum(axis=0)/10**6 ### produced energy in TWh (positive values only)
production_df.max(axis=0)/1000 ### Pmax in GW

Date_d=pd.date_range(start=str(year)+"-01-01 00:00:00",end=str(year)+"-12-31 23:00:00",   freq="1H")
production_df.index=Date_d; areaConsumption.index=Date_d;
fig=MyStackedPlotly(y_df=production_df, Conso=areaConsumption)
fig=fig.update_layout(title_text="Electricity production (in kWh)", xaxis_title="hours of the year")
#plotly.offline.plot(fig, filename=GraphicalResultsFolder+'file.html') ## offline
fig.show()


#endregion

If you want to contribute here, try improving the visualisation tools (with respect to storage, multi-zone, integration of interconnections, ...).

5. Complete French case ¶

If you want to contribute here:

  • try improving the visualisation tools (with respect to storage, multi-zone, integration of interconnections, ...)
  • propose new cases with new datasets (new years, new countries, new sets of countries, several nodes per country, new technologies, ...)
In [10]:
#region "simple" France loading parameters selecting technologies
Zones="FR"
year=2013

Selected_TECHNOLOGIES=['OldNuke','Coal','CCG','TAC', 'WindOnShore','HydroReservoir','HydroRiver','Solar','curtailment']

#### reading CSV files
areaConsumption = pd.read_csv(InputConsumptionFolder+'areaConsumption'+str(year)+'_FR.csv',
                                sep=',',decimal='.',skiprows=0,parse_dates=['Date']).set_index(["Date"])
availabilityFactor = pd.read_csv(InputProductionFolder+'availabilityFactor'+str(year)+'_FR.csv',
                                sep=',',decimal='.',skiprows=0,parse_dates=['Date']).set_index(["Date","TECHNOLOGIES"])
TechParameters = pd.read_csv(InputOperationFolder+'Gestion-Simple_TECHNOLOGIES.csv',
                             sep=',',decimal='.',skiprows=0).set_index(["TECHNOLOGIES"])
StorageParameters = pd.read_csv(InputOperationFolder+'Gestion-RAMP1_STOCK_TECHNO.csv',
                                sep=',',decimal='.',skiprows=0).set_index(["STOCK_TECHNO"])

#### Selection of subset
availabilityFactor=availabilityFactor.loc[(slice(None),Selected_TECHNOLOGIES),:]
TechParameters=TechParameters.loc[Selected_TECHNOLOGIES,:]
#TechParameters.loc[TechParameters.TECHNOLOGIES=="CCG",'capacity']=15000 ## margin to make everything work
#endregion

In [11]:
#region "simple" France : solving and loading results
model= GetElectricSystemModel_GestionSingleNode_withStorage(Parameters = {"areaConsumption" : areaConsumption,
                                                               "availabilityFactor" : availabilityFactor,
                                                               "TechParameters" : TechParameters,
                                                               "StorageParameters" : StorageParameters})
if solver in solverpath :  opt = SolverFactory(solver,executable=solverpath[solver])
else : opt = SolverFactory(solver)
results=opt.solve(model)
Variables = getVariables_panda_indexed(model)
Constraints = getConstraintsDual_panda(model)


production_df=Variables['energy'].pivot(index="Date",columns='TECHNOLOGIES', values='energy')
production_df.loc[:,'Storage'] = Variables['storageOut'].pivot(index='Date',columns='STOCK_TECHNO',values='storageOut').sum(axis=1)-Variables['storageIn'].pivot(index='Date',columns='STOCK_TECHNO',values='storageIn').sum(axis=1) ### put storage in the production time series
Delta= production_df.sum(axis=1)-areaConsumption["areaConsumption"]
sum(abs(Delta))
production_df.sum(axis=0)/10**6 ### produced energy in TWh
production_df[production_df>0].sum(axis=0)/10**6 ### produced energy in TWh (positive values only)
production_df.max(axis=0)/1000 ### Pmax in GW

Date_d=pd.date_range(start=str(year)+"-01-01 00:00:00",end=str(year)+"-12-31 23:00:00",   freq="1H")
production_df.index=Date_d; areaConsumption.index=Date_d;
fig=MyStackedPlotly(y_df=production_df, Conso=areaConsumption)
fig=fig.update_layout(title_text="Electricity production (in kWh)", xaxis_title="hours of the year")
#plotly.offline.plot(fig, filename='file.html') ## offline
fig.show()
#endregion