Python Report Generator

Hi all,

I have written a Python application with a GUI that parses the JSON
response from the API and writes it to a CSV file. I have this side
working fine, but…

My next hurdle is the paginated results and being able to iterate through
all pages.

Here’s the rather ugly code I’ve put together myself. It kind of works,
but it’s very slow and the results are unreliable:

#URL PAGINATION
pre_number_string = '?page='
page_number = 1
per_page = '&per_page=100'
pagination = pre_number_string + str(page_number) + per_page

# /// START OF REPORT REQUEST ///

    def getUserData():

        c = pycurl.Curl()
        c.setopt(pycurl.URL, "%s" % str(freeagenturl))
        c.setopt(pycurl.HTTPHEADER, [
            response,
            'Authorization: Bearer '+access_token[3],
            'Accept: application/json',
            'Content-Type: application/json'
            ])
        c.setopt(pycurl.HEADERFUNCTION, headerresponse.write)
        c.setopt(pycurl.WRITEFUNCTION, buf.write)
        c.setopt(pycurl.VERBOSE, True)
        c.setopt(c.SSL_VERIFYPEER, 0)
        c.perform()
    getUserData()

    #HEADER RESPONSE FOR PAGINATION
    header_response = headerresponse.getvalue()
    headersplit = header_response.split(';')
    headercounter = 1

    #LOOP THROUGH PAGES
    if reporttypeholder == 'expenses':
        headerint = int(headersplit[2][57:-14]) #THIS OBTAINS THE TOTAL NUMBER OF PAGES FROM THE PYCURL HEADER
        while headercounter <= headerint:
            page_number = page_number + 1
            pagination = str(page_number) + per_page
            if projectnumberholder == "":
                freeagenturl = "https://api.freeagent.com/v2/" + "%s" % str(reporttypeholder) + pagination + "%s" % str(reportstatusholder)
            else:
                freeagenturl = "https://api.freeagent.com/v2/" + "%s" % str(reporttypeholder) + pagination + "%s" % str(reportstatusholder) + "%s" % str(projectnumberurlprefix) + "%s" % str(projectnumberholder)
            getUserData()
            headercounter = headercounter + 1

    reportresponse = buf.getvalue()
    # /// END OF REPORT REQUEST ///
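
(For reference, a rough sketch of the same request and page-count steps,
using a fresh buffer per request and a regex over the Link header instead
of the fixed-offset slice at headersplit[2][57:-14]. The names fetch_page
and total_pages, and the way the token is passed in, are illustrative
only, and it assumes FreeAgent sends a standard Link: <…>; rel="last"
header, which the header-splitting above suggests:)

import re
from io import BytesIO

import pycurl

def fetch_page(url, token):
    # One request per call, with fresh buffers, so pages never pile up
    # in a single shared buffer.
    body, headers = BytesIO(), BytesIO()
    c = pycurl.Curl()
    c.setopt(pycurl.URL, url)
    c.setopt(pycurl.HTTPHEADER, [
        'Authorization: Bearer ' + token,
        'Accept: application/json',
    ])
    c.setopt(pycurl.HEADERFUNCTION, headers.write)
    c.setopt(pycurl.WRITEFUNCTION, body.write)
    c.perform()
    c.close()
    return body.getvalue(), headers.getvalue().decode('iso-8859-1')

def total_pages(header_text):
    # Read the page count out of the rel="last" link (assumed format),
    # rather than slicing the raw header at fixed offsets.
    m = re.search(r'<[^>]*[?&]page=(\d+)[^>]*>;\s*rel="last"', header_text)
    return int(m.group(1)) if m else 1

# Usage sketch: fetch page 1 first, read the page count from its headers,
# then loop over pages 2..N building each page's URL the same way as above.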

Does anyone know of a module that will do this in a more Pythonic/cleaner
way? Or can anyone show me how I can clean my code up? Or is there a more
official FreeAgent solution?

Thanks,
Karl

Hi Karl,

I wrote up a small Python script to request data from the FreeAgent API a
few days ago. It uses the requests module (“Requests: HTTP for Humans”),
which makes the whole HTTP side very easy. The script doesn’t include any
token-retrieval code, so you will need to already have an access token to
use it. It doesn’t do pagination (yet), but from what I’ve seen of the
requests module it will make that much easier than PyCURL seems to.
Hopefully it’s of some help.

Here’s the script (a GitHub Gist): A simple example showing how to request details for a company and create a contact using the FreeAgent API in Python.
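
For a rough idea of how the pagination side could look with requests (a
sketch only: fetch_all and API_BASE are made-up names, it assumes
FreeAgent sends a standard Link header with rel="next", which requests
exposes as r.links, and that each page’s JSON wraps the items in a single
key like "expenses"):

import requests

API_BASE = 'https://api.freeagent.com/v2'  # illustrative constant

def fetch_all(endpoint, access_token, params=None):
    # Collect every page of a listing endpoint into one flat list.
    headers = {
        'Authorization': 'Bearer ' + access_token,
        'Accept': 'application/json',
    }
    url = '%s/%s' % (API_BASE, endpoint)
    params = dict(params or {}, per_page=100)
    items = []
    while url:
        r = requests.get(url, headers=headers, params=params)
        r.raise_for_status()
        data = r.json()
        items.extend(next(iter(data.values())))   # e.g. data['expenses']
        url = r.links.get('next', {}).get('url')  # parsed Link header
        params = None   # the "next" URL already carries page/per_page
    return items

# e.g. expenses = fetch_all('expenses', my_access_token)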

Regards,
Harry Mills


Hi Harry,

Nice-looking script. I did look at the requests module originally, but I
personally found PyCURL easier to read and work with. I have all of that
side working perfectly, as well as the refresh token. The problem, as
with your script, is working around the pagination to ‘catch all’ the
lines from the API.

The loop I created to re-query the API does work (you can watch it
stepping through the page numbers), but the script either crashes or ends
up with a huge, virtually unusable variable:

reportresponse = buf.getvalue()

Without using pagination this works really well including the GUI.
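
(One likely reason: if every page is written into the same buf, then
buf.getvalue() ends up being several complete JSON documents glued
together, which json can’t parse in one go and which grows very large. A
sketch of decoding each page separately instead; page_bodies and the
'expenses' key are illustrative only:)

import json

def combine_pages(page_bodies, list_key='expenses'):
    # page_bodies: the raw JSON text of each page, fetched one request at
    # a time into its own buffer. list_key is just an example of the key
    # that wraps the items in each page's response.
    rows = []
    for body in page_bodies:
        page = json.loads(body)       # each page parses fine on its own
        rows.extend(page[list_key])   # keep the items, drop the wrapper
    return rows

# The combined list can then go straight to the existing CSV-writing side.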

If you’d like to see my script for the token refresh please let me know.

Thanks again for your help,
Karl
