Friday 28 March 2014

Email To Hoist

I mentioned in jest to the Hoist Product Manager that there should be a way to email data to Hoist, but my mind would not let it rest.

So I bring to you a not-so-finished method of sending an email to get data into Hoist.

https://bitbucket.org/andrewcox/email2hoist

There was nearly some code reuse from mailthrottler, so I am very pleased with myself.

Now if I can just find a use case!

Saturday 25 January 2014

Creating a simple voting site with Hoist

I needed to create a simple two-page site that would let people vote on when my baby would be born.
Not wanting the trouble of hosting the database or creating the API to store the data, I turned to Hoist.

With the easy-to-use JavaScript library found at https://github.com/hoist/hoist-js I was quickly able to create the voting app I needed.

Here's how:

To Vote:
// gather the vote from the form fields
var rawvote = {};
rawvote.date = $("#date-here").text();
rawvote.voterName = $("#voter-name").val();
rawvote.babyName = $("#baby-name").val();

// store it in Hoist, then redirect once the post succeeds
Hoist.apiKey("MYAPIKEY");
Hoist.post("vote", rawvote, function (data) {
  window.location.href = "/voted.html";
});

To get the Votes back:
Hoist.apiKey("MYAPIKEY");
var votes = Hoist("vote");
votes.get(function (data) {
  // build a bootstrap row per vote and append them all at once
  var html = [];
  for (var i = 0; i < data.length; i++) {
    html.push('<div class="row">');
    html.push('<div class="col-md-6">' + data[i].date + '</div>');
    html.push('<div class="col-md-6">' + data[i].babyName + '</div>');
    html.push('</div>');
  }
  $("#results").append(html.join(""));
});
As simple as that!

Friday 25 October 2013

New Website Content

Just pushed a new version of www.civet-labs.com live; it has more content than the last parked version.

Some interesting things for us all to remember:

One:

Always have someone test your work (thanks, James!), as you are bound to forget something.

Two:

Always have a deploy script!  It makes life so much easier.

Here is the exceptionally simple one that releases www.civet-labs.com to S3, using boto (again):

import boto.s3.connection
import boto.s3.key
import os, sys

bucket_names = {"peek": "peek.civet-labs.com",
                "live": "www.civet-labs.com"}

# deploy to "peek" unless a known bucket name is given on the command line
default_name = "peek"
name = ""
if len(sys.argv) > 1:
    name = sys.argv[1]

if name not in bucket_names:
    name = default_name

def s3Callback(bytes_transmitted, bytes_total):
    # progress callback that boto calls during each upload
    print "\t", bytes_transmitted, bytes_total

conn = boto.s3.connection.S3Connection()
bucket = conn.get_bucket(bucket_names[name])
print "Deploying to: ", bucket

# upload every file in the parent directory, publicly readable
filenames = map(lambda x: os.path.join("..", x), os.listdir(".."))
for filename in filenames:
    if os.path.isfile(filename):
        print filename
        key = boto.s3.key.Key(bucket, os.path.split(filename)[1])
        key.set_contents_from_filename(filename,
                                       cb=s3Callback,
                                       policy='public-read')
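
Assuming the script is saved as, say, deploy.py (the name is mine) in a folder one level below the site files — it uploads everything it finds in the parent directory — a release is just:

python deploy.py live

Leave the argument off, or mistype it, and it falls back to the peek bucket.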
Three:

twistd.py web --path=.

runs a much better web server than

python -m SimpleHTTPServer

for really simple testing of a static site, if you have Twisted installed.

Tuesday 22 October 2013

Twisted Server on AWS EC2

This is a post on how I built a script to provision an AWS EC2 instance for a Twisted server. For those who just want the finished script, it is in my Bitbucket repo, with instructions on how to use it: https://bitbucket.org/andrewcox/provisiontwistedserver

Wanting a public-facing version of MailThrottler, I started to look for an easy way to deploy it, like a web site on Heroku or AWS Elastic Beanstalk.

Looking at Beanstalk it seemed easy, but I started hitting roadblocks because I wanted to run a non-web TCP server.  Moving on to CloudFormation, I was quickly lost in a world of Chef and Ruby.  At this point I went to bed.

The next evening, after a little more digging, I thought: why not just do it all myself by creating an EC2 instance, using boto and Fabric?

Boto is Amazon's Python package that provides interfaces to Amazon Web Services, and Fabric is a Python library for streamlining the use of SSH.

With boto I could launch the instance and wait for it to be up and reachable by SSH; then with Fabric I could run the shell commands over SSH needed to provision the mailthrottler server.

Starting in earnest, I found that the actual launching of the server is easy.  Once you have the image id that you want, a keypair and a security group, you can launch to your heart's content — though you will have lots of servers with nothing on them.
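
For anyone wanting a concrete starting point, here is a minimal boto sketch of that launch step.  It is not the code from the repo, and the region, AMI id, keypair and security group names are all placeholders:

import boto.ec2

# region, AMI id, keypair and security group are placeholders -- use your own
conn = boto.ec2.connect_to_region("us-east-1")
reservation = conn.run_instances("ami-xxxxxxxx",
                                 key_name="my-keypair",
                                 security_groups=["twisted-server"],
                                 instance_type="t1.micro")
instance = reservation.instances[0]
print "Launched", instance.id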

The next challenge is to wait for the server to be reachable before you can SSH to it.  It is not enough to wait for the instance to be running; you also have to wait for both status checks to be green-lit.  For some strange reason the response sometimes wouldn't give the correct object back, which is why you will see a dummy one in the code.
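
Continuing the sketch above, a polling loop along these lines covers both waits (again my approximation, not the repo's exact code — the empty-list case here stands in for that dummy object):

import time

# first wait for the instance itself to reach the running state
while instance.state != "running":
    time.sleep(10)
    instance.update()

# then wait for both status checks (system and instance) to pass;
# get_all_instance_status can come back empty early on, so treat
# that as "not ready yet"
checks = conn.get_all_instance_status(instance_ids=[instance.id])
while (not checks
       or checks[0].system_status.status != "ok"
       or checks[0].instance_status.status != "ok"):
    time.sleep(10)
    checks = conn.get_all_instance_status(instance_ids=[instance.id])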

Once the instance was launched and running, Fabric made it ridiculously easy to run the commands I needed; it is just a matter of setting the correct hosts and keyfile, then calling run() or sudo() as needed.
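
The Fabric side looks roughly like this; the user, key file and the provisioning commands themselves are illustrative stand-ins (the real steps are in the repo):

from fabric.api import env, run, sudo

# point fabric at the freshly launched instance
env.host_string = instance.public_dns_name
env.user = "ubuntu"                  # assumption: an Ubuntu AMI
env.key_filename = "my-keypair.pem"

sudo("apt-get update -q")
sudo("apt-get install -y -q python-twisted")  # illustrative package
run("echo provisioned")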

The only real problem I had after that was that Fabric was exiting from the twistd command too quickly, so after some poking around I found it was best to tack a && sleep 5 onto the end, and everything was great.
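
So the final command ends up looking something like this — the twistd arguments are a guess at what mailthrottler needs; the trailing sleep is the actual point.  My best guess at why it helps: twistd daemonises and returns immediately, and Fabric then tears down the SSH session before the daemon has fully detached, so the sleep gives it breathing room.

# the .tac filename is hypothetical; only the "&& sleep 5" matters here
run("twistd -y mailthrottler.tac && sleep 5")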

Like I said at the start of the post, the script itself is located at https://bitbucket.org/andrewcox/provisiontwistedserver; it has a readme with instructions on how to use it.

In the end the script took me about two evenings to write and cost a grand total of $0.20 in AWS fees.

Let me know if it works for you!

Friday 12 April 2013

Heroku Scheduler and the UnicodeEncodeError

Ruminate runs a scheduled task on Heroku every hour to grab the new entries from the configured news feeds.

I was noticing that some feeds were not updating; checking the logs, I saw that I was getting a UnicodeEncodeError:
'ascii' codec can't encode character u'\u2022' in position 49: ordinal not in range(128)
This was odd, because if I ran it from the command line on my machine it worked — and even stranger, when I ran it with heroku run it also worked.

Turning to Google I found this answer on StackOverflow - http://stackoverflow.com/a/11762169/27907

The long and short of it is that the print statement in Python will happily default to ASCII if not told otherwise.  I am guessing that when run from my console it picks up that it is UTF-8, but when running detached it defaults back to ASCII.  As the answer states, this is an easy fix: just set the environment variable PYTHONIOENCODING to the codec that you want, in this case utf8.
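
You can reproduce it with a one-liner: in Python 2, printing a unicode string when stdout is detached (piped, or under the scheduler) makes the codec fall back to ASCII:

# fails with UnicodeEncodeError when stdout is not a terminal,
# works once PYTHONIOENCODING=utf8 is set in the environment
print u"\u2022 a bullet point"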

Running the following command sets the environment variable on the Heroku app to the correct value:
heroku config:add PYTHONIOENCODING=utf8
This fixed the issue and got my missing entries into the app.

Hope this helps.

Tuesday 2 April 2013

Announcing Ruminate


A week ago I challenged myself to write a simple Google Reader clone.

The reason being that I wanted to write a non-trivial piece of python code, and document the creation process.

It is now at version 0.1.  I have released it onto Heroku and it works for my reading needs.

You can see the Trello board at https://trello.com/b/ZqZWaSlr

and the code here https://bitbucket.org/andrewcox/ruminate

Currently Ruminate really only works for me, but in the coming weeks I hope to add more functionality (as seen on the Trello board).  If you want access, find me at @vipox, or follow the instructions on how to set it up on Heroku yourself.

In the coming days I hope to blog the development of the app!

Wednesday 13 March 2013

Installing .Net 1.1 on an AWS Windows 2008 machine

I am trying to get the .Net 1.1 applications that I work with installed on an AWS Windows 2008 instance so that it can act as a development server.

After installing .Net 1.1, our Windows services install and run OK.

The problem comes when you try to run an ASP.Net application.  Searching, I found this blog post, which has great instructions on how to get .Net 1.1 installed.

So big thanks to Ivor Bright; after following your instructions and restarting, I was away laughing.

Now I just have to work out if I cleared out too much of our database before I restored it into AWS.