Answered

Burn down / up for the current sprint

ruald ordelman 11 years ago updated by Matthew O'Riordan (Founder) 11 years ago 8

It looks like the 'Stats' tab only works after a completed sprint. This implies that the stats tab always shows the last sprint and not the current one.


Answer

Answered

We have identified the issue and fixed it now.

Under review

Yup, that is currently the case; we do not provide stats for the current sprint. What would you expect to see for the current sprint?

For the current sprint I expect a burn down. During the sprint we used the statuses 'to do', 'accepted', and 'completed'. We assumed those were used for a live burn down.

Hi Ruald


Sorry for the slow reply, but we have in fact added this request to our backlog.  Unfortunately it's not very high priority right now, but we will introduce this at some point.


Matt

Dear Matthew,


Is there a way to get the information out of the system, for example via the API? What I need is the total number of stories in a sprint and, for each day of the sprint, the number of completed stories. That's all I need.


With that we will try to create a burn down ourselves.


Ruald
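For anyone attempting the same thing: once you have the total story count and a per-day completed count, the burn-down data points are just a running subtraction, with a straight line for the ideal pace. A minimal sketch (the day counts below are made-up example data, not anything returned by easyBacklog):

```python
# Sketch: compute burn-down data points from a sprint's story counts.
# Inputs are assumptions: the total number of stories in the sprint,
# plus how many stories were completed on each day of the sprint.

def burn_down(total_stories, completed_per_day):
    """Return the number of stories remaining at the end of each day."""
    remaining = total_stories
    points = [remaining]  # day 0: nothing completed yet
    for done_today in completed_per_day:
        remaining -= done_today
        points.append(remaining)
    return points

def ideal_line(total_stories, sprint_days):
    """Straight-line ideal burn down from the full total to zero."""
    step = total_stories / sprint_days
    return [round(total_stories - step * day, 2) for day in range(sprint_days + 1)]

# Example: 20 stories, 5-day sprint
actual = burn_down(20, [3, 5, 0, 6, 4])
ideal = ideal_line(20, 5)
print(actual)  # [20, 17, 12, 12, 6, 2]
print(ideal)   # [20.0, 16.0, 12.0, 8.0, 4.0, 0.0]
```

Plotting `actual` against `ideal` per day gives the usual burn-down chart; counting stories rather than points is a simplification here.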

There sure is.  You can access the API at http://easybacklog.com/api


Shout if you need any more information.


Matt

Dear Matthew,


I've found almost all the information, except the 'Story Shortcut' information. That REST call gives me an error: '&lt;title&gt;We're sorry, but something went wrong (500)&lt;/title&gt;'.


Hopefully you can fix it soon, so I can complete our burn down.


Ruald

Hi Ruald


Please can you send me a cURL request that replicates this issue. If you don't want to share it in the public forum, please email me privately at support@easybacklog.com. We will get this fixed if there is a problem.


Matt

Dear Matt,


I've found the problem: I passed a wrong story_id. Maybe you can add a check for that and return a meaningful error message.


Ruald

Can you give me an example of an invalid request, as you should be receiving a valid error message?
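In the meantime, a client can translate the HTTP status code into something readable instead of surfacing the raw HTML error page. A small sketch of such a mapping; the status codes and wording are generic HTTP conventions, not anything specific to easyBacklog's API:

```python
# Sketch: map an HTTP status code from an API call to a readable
# message, rather than dumping the raw "something went wrong" page.

FRIENDLY_ERRORS = {
    400: "Bad request -- check the parameters you passed.",
    401: "Not authorised -- check your API credentials.",
    404: "Not found -- the id you passed probably does not exist.",
    500: "Server error -- possibly an invalid id such as a wrong story_id.",
}

def describe_response(status_code):
    """Return a human-readable summary for an HTTP status code."""
    if 200 <= status_code < 300:
        return "OK"
    return FRIENDLY_ERRORS.get(status_code, f"Unexpected HTTP status {status_code}")

print(describe_response(500))
# Server error -- possibly an invalid id such as a wrong story_id.
```

Checking `response.status_code` against a table like this before parsing the body makes a bad `story_id` fail with a clear message instead of a 500 page.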
