I apologize that this post is coming a little later than I would have liked; when this issue was first raised, we hoped the endpoint's downtime would be shorter than it has turned out to be.
Here's a short summary of events so you know where things currently stand:
On 2 November, between 12:15 and 12:30, the market ESI endpoints were being hammered so hard that CPU use spiked to 100%. Just shy of 100 IP addresses had to be banned to restore normal operation.
On 3 November, between 12:15 and 12:30, the same thing happened, this time from a different set of IPs, all originating from AWS.
When it happened again on 4 November, the decision was made to take the endpoint down, given the daily intervention it required and the performance issues it was causing.
The endpoint will likely need some degree of redesign, or authentication added to it, before it can be made available again; the extent of this work and the teams involved are still being scoped out.
Once I have more details on what's involved, or any new developments to share, I'll relay them here.