A large part of PSC’s Market Systems and Energy Applications service offering is building bespoke software systems for electricity markets, based on market rules with immovable start dates. This blog post covers some of the lessons we have learned over the past 25+ years working in the heavily regulated area of electricity markets and presents a framework proven in our software delivery domain.
The systems that run electricity markets are complex. Depending on the system, high availability may be required. Many systems interact with one another, and no two are the same, because each is governed by market rules that differ in every jurisdiction. Changes to market rules are frequent, particularly in the last 10 years, as responses to climate change have diversified the fuel mix within our electricity grids and driven a move from centralized markets to more distributed ones. Market rule changes are bound to commencement dates that cannot be missed, for both compliance and reputational reasons. Market designs also vary: day-ahead/balancing markets vs. real-time markets, energy-only vs. energy-plus-reserve markets, nodal vs. zonal, and the list goes on. Electricity markets are not part of a shrink-wrapped software world.
It is important to understand the problem domain of electricity market systems when creating a software delivery framework. Market systems present six key considerations that any software delivery framework must address.
PSC has adapted and evolved our framework for delivering bespoke market systems over the last 25 years. Software is forever evolving, and we must adapt to this evolution. Part of this evolution is improving the way we deliver software to our end clients. We have introduced agile methods to our framework and have also retained some aspects of a more traditional waterfall approach due to the unique nature of our problem domain as described above.
This can be a controversial approach, particularly among agile advocates. Some see retaining waterfall methods as a half measure that somehow diminishes the benefits of the agile paradigm. Hopefully, this explanation will reverse some of that thinking, or at the very least, encourage some healthy debate.
We begin our projects in a waterfall fashion. We utilize the Project Management Body of Knowledge (PMBOK) as our project governance framework. We develop the following deliverables:
These are all fairly standard project management and software documentation deliverables and require sign-off from the business before proceeding.
There is some flexibility on the Project Scope Statement as this does not need to be completely defined upfront. Just enough to get started is acceptable if the schedule is tight and resources are ready to go. This ticks the sufficient project governance and the market rules framework considerations boxes.
We’ve done some upfront thinking, we’ve planned who is doing what, we know where we’re developing and testing, we’ve identified initial risks and issues, we’ve got a high-level design, and we’ve run a kick-off meeting with project stakeholders. Now we switch to an agile paradigm. PSC does not subscribe to any one framework but predominantly uses elements of Scrum and XP. The usual agile practices encourage collaboration: including the business in the project team (the Product Owner) to help prioritize the backlog and plan sprints, running sprint ceremonies (planning, retrospectives, showcases, stand-ups), and developing software iteratively and incrementally.
We don’t insist on user acceptance being delivered in the same sprint as the build, as we understand the business can have capacity issues with their “business as usual” assignments. Instead, we work with the business to schedule their time as close as possible to our sprint delivery. This ticks the end-user availability framework considerations box.
We use Development and Operations (DevOps) elements to deliver our software, ticking the automation framework considerations box. We develop automated tests to build quality into our systems and have gone as far as introducing automated certification tests that are certified by external auditors to reduce audit costs.
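To illustrate the kind of automated check this enables, here is a minimal sketch of an assert-based test over a settlement calculation. The function name, the rounding rule, and the figures are hypothetical, not an actual market rule:

```python
# Hypothetical example: automated tests asserting that a settlement
# calculation obeys a (made-up) rounding requirement. In practice these
# would run in the build pipeline alongside the certification suite.

def settle_interval(quantity_mwh: float, price_per_mwh: float) -> float:
    """Settle one trading interval, rounding to the cent as the
    (hypothetical) market rule requires."""
    return round(quantity_mwh * price_per_mwh, 2)

def test_settlement_rounds_to_the_cent():
    # 12.345 MWh at $41.07/MWh = $507.00915, which rounds to $507.01
    assert settle_interval(12.345, 41.07) == 507.01

def test_zero_quantity_settles_to_zero():
    assert settle_interval(0.0, 55.0) == 0.0
```

Encoding rules like this as executable tests means every build re-verifies compliance, which is what makes externally audited certification tests practical.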
We don’t forget about risk and issue management, keeping an updated Risk and Issue Register to discuss mitigation strategies within the project team and beyond if required.
We inform our stakeholders of our progress using traffic light indicators on cost, schedule and scope. We utilize burn-down charts and earned value reporting to back up these traffic light indicators and align these with our sprints. We publish any new risks or issues encountered within the sprint timeframe.
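The earned value figures behind those traffic lights can be sketched as follows. The formulas (CPI = EV/AC, SPI = EV/PV) are standard earned value management; the red/amber/green thresholds here are illustrative assumptions, not PSC's actual reporting rules:

```python
# Sketch of the earned-value arithmetic behind a traffic light report.
# Thresholds are illustrative only.

def traffic_light(planned_value: float, earned_value: float,
                  actual_cost: float) -> dict:
    """Return cost/schedule performance indices and a simple
    red/amber/green rating for each (standard EVM formulas)."""
    cpi = earned_value / actual_cost    # cost performance index
    spi = earned_value / planned_value  # schedule performance index

    def rating(index: float) -> str:
        if index >= 0.95:
            return "green"
        if index >= 0.85:
            return "amber"
        return "red"

    return {"cpi": cpi, "spi": spi,
            "cost": rating(cpi), "schedule": rating(spi)}

# e.g. a sprint planned at $100k that delivered $90k of value for $110k spent
status = traffic_light(100_000, 90_000, 110_000)
```

In that example the schedule index is 0.90 (amber) while the cost index is about 0.82 (red), which is exactly the kind of early warning the burn-down charts and sprint-aligned reports are meant to surface.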
We sometimes switch from backlog/sprints to Kanban reporting when we near the end of delivery for simplification.
We utilize MoSCoW prioritization (Must, Should, Could, Won’t), targeting the completion of all Must and Should tasks. Could and Won’t tasks sit at the bottom of the backlog (but can be reprioritized) and will be completed if time and budget permit.
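The ordering described above can be sketched in a few lines; the backlog items here are made up for illustration:

```python
# Minimal sketch of ordering a backlog by MoSCoW priority.
# Musts and Shoulds float to the top; Coulds and Won'ts sink to the
# bottom, where they are done only if time and budget permit.

MOSCOW_RANK = {"Must": 0, "Should": 1, "Could": 2, "Won't": 3}

backlog = [
    ("Archive old settlement runs", "Could"),
    ("Calculate interval settlements", "Must"),
    ("Export results to CSV", "Should"),
    ("Dark-mode UI theme", "Won't"),
]

ordered = sorted(backlog, key=lambda item: MOSCOW_RANK[item[1]])
```

Because Python's sort is stable, items within the same priority band keep their existing backlog order, which matters when the Product Owner has already ranked them during sprint planning.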
We’ve developed our system, but what about those other systems we integrate with? Depending on the change, system integration testing can be lengthy and works better under a waterfall methodology. This ticks the system integration box.
The transition to operational teams for our more mature clients (markets that have been in place 5+ years) that have large IT departments and support their own systems is essential. This also includes transition to the end business users, typically the system and/or market operations teams. We often have secondments from the operational teams onto the project, so they build up their IP by working directly on the software.
We transition throughout the software delivery lifecycle by providing showcases of what we build to get feedback from a wider audience and promote buy-in. Towards the end of the implementation phase, we provide workshops with the IT support teams and end business users and provide documentation on the systems, altering the high-level design document to an “As-implemented” design document.
With less mature clients (markets less than 5 years old), support is often outsourced while their internal teams are being built up to take over this function. Transition is still required but may be to the same company that built the software. PSC has performed this task with a number of clients in the APAC region.
Lessons learned and post-implementation reviews are just good project management. Making the same mistake over and over is madness. Typically, we will either introduce this practice (e.g., set up a lessons learned knowledge base) or work within clients’ existing project close frameworks. When we start new projects, we check the lessons learned knowledge base and ensure we make any required changes to our framework so we don’t repeat mistakes. This produces an evolving and ever-improving framework. Finally, the last transition box is ticked.
This framework has worked for PSC on numerous greenfield projects and on system changes driven by market rule changes for our clients in the APAC region. PSC has a reputation for delivering complex software implementation projects for market operators in a timely fashion, meeting strict rule compliance dates. The framework described above is proven to work in our niche industry, but we’re always open to trying new approaches to further improve our software delivery model.