Just a short post to let everyone know that my freelancing has now launched as an official LLC. You can check it out over at J Walsh Creative. Bear with me while I work on getting the site up to par. The nice thing is that almost all of the back-end items are taken care of for the business. That means I’m ready and able to take on client work as well as look for some speaking opportunities. If you have either of those, feel free to contact me!
I’m currently working on a new design for my site, and it is going to show recent stories I’ve saved in Readability. Readability, if you are not familiar, is at its core a site that allows you to save webpages for reading later. You can then read and archive them on the Readability site. They have apps on the various platforms, and many applications are adding the ability to save links directly to Readability so you can read them later. I’ve found this more useful than some of the other readers and social bookmarking sites. In the new design I wanted to share what I have been reading lately.
In working with a few other APIs, I’ve found they eventually add rate limits, or get slower to respond to requests as their user base grows. To avoid these issues I decided to cache calls to the Readability API instead of making them live on the site, client side. It isn’t anything terribly complicated, but I’ll walk through the code for the script.
#!/usr/bin/python
import readability
from datetime import date, timedelta
from sqlalchemy import *
These are the required libraries for the script.
''' readability config '''
token = readability.xauth('read_username', 'read_apikey', 'read_email', 'read_pass')
rdd = readability.oauth('read_username', 'read_apikey', token=token)
These lines configure the OAuth call to the Readability API. You’ll need to acquire an API key from the Readability site and then substitute your own values here.
''' sqlalchemy setup '''
engine = create_engine('mysql://db_username:db_passwd@db_host/db')
connection = engine.connect()
After some searching and talking to a few people, SQLAlchemy looked like the best way to get Python working with MySQL. This library made the script much easier to write than I expected.
''' Fetch new bookmarks since yesterday '''
yesterday = date.today() - timedelta(1)
for b in rdd.get_bookmarks(added_since=yesterday.strftime('%m-%d-%y')):
    # Parameterized query so quotes in titles or URLs can't break the INSERT
    connection.execute(
        "INSERT INTO bookmarks VALUES (NULL, %s, %s, %s)",
        (b.id, b.article.title.encode("utf-8"), b.article.url.encode("utf-8"))
    )
For your initial pull of your bookmarks into the database, you would just change the for statement to “rdd.get_bookmarks()”. As you can see, the data saved in the database is the Readability bookmark ID, the title, and the URL for the story. There is more available in the API, but this should be sufficient to display a ‘currently reading’ list in a widget on a webpage.
The last step is to schedule this script in cron to run once a day. If you find yourself saving bookmarks more often, you may want to run it more frequently.
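A crontab entry for a daily run might look like this (the script path here is hypothetical):

```shell
# Run the Readability caching script once a day at 2:00 AM
0 2 * * * /usr/bin/python /home/user/readability_cache.py
```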
The database structure and code are on GitHub.
Web hosting can be a scary topic for some people. People who create websites come in all shapes and sizes, with all kinds of backgrounds, and to some of them setting up and running your own server can be a bit overwhelming. That is why the many hosting companies that offer ‘one click installs’ of popular web software and various easy setups are doing so well. Those have their place, and I’ve used some that are very nice. But I started to outgrow them and slowly realized I was paying for add-on features I never used, and that I was better off moving to something different. It was time to go back to managing the entire server, with no extra bells and whistles.
I have been a fan of Amazon’s AWS for quite some time now. Previously I was just using it for backups of my websites and for offsite storage of my pictures and other files. Like many others, I have been using Dropbox for a while now. Having both my own Amazon S3 account and Dropbox seemed a bit redundant, since Dropbox is S3 on the back end and I’m already paying for Amazon S3. I also like the thought of having more control over my data, and Dropbox has been hacked in the past. So I started looking into different ways to leverage S3 storage on my Mac laptop and my PC.
Among the Twitter team’s recent updates this week was their new oEmbed API. This is a rate-limited service, meaning they will throttle and potentially blacklist you if you hammer them with requests. The workaround is to cache the results. In this article we will create a simple caching system for this API, one that can be extended to the other oEmbed APIs out there. The oEmbed site has a list of some of the others.
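A minimal sketch of the caching idea looks like this. The function names, cache path, and TTL are my own assumptions, and the actual API request is handed in as a callable so the cache layer works for Twitter or any other oEmbed provider:

```python
import hashlib
import json
import os
import time

def cached_oembed(url, fetch, cache_dir="/tmp/oembed_cache", ttl=86400):
    """Return oEmbed data for url, caching results on disk to stay under rate limits.

    `fetch` is any callable url -> dict that performs the real API request.
    """
    os.makedirs(cache_dir, exist_ok=True)
    # The MD5 of the URL keys the cache file on disk
    key = os.path.join(cache_dir, hashlib.md5(url.encode("utf-8")).hexdigest() + ".json")
    if os.path.exists(key) and time.time() - os.path.getmtime(key) < ttl:
        with open(key) as f:
            return json.load(f)  # cache hit: no API call made
    data = fetch(url)            # cache miss: hit the API once
    with open(key, "w") as f:
        json.dump(data, f)
    return data
```

Entries older than the TTL are simply fetched again, so embeds stay reasonably fresh without hammering the API.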
In my previous post I mentioned that I would update the script to allow for incremental backups of the filesystem and MySQL. Depending on your ability to alter your MySQL config, the way I went about the incremental backups may not work for you. It turned out I had to change parts of the script dramatically, so I decided to create a new post with the changes.
Recently I migrated to a new server at Media Temple, and with a new server comes a new backup script, with a couple of new features this time around. The new backup script will back up all of the files for your websites, as well as run mysqldump on all the databases that you need. The script will then FTP the files offsite, as well as save them to Amazon S3 cloud storage.
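The mysqldump step can be sketched like this. Credentials, database names, and paths here are placeholders rather than the ones from my server, and the real script loops this over every database before shipping the files off to FTP and S3:

```python
import datetime

def dump_command(db, user="db_user", password="db_pass", out_dir="/backups"):
    """Build the mysqldump invocation and dated output path for one database.

    Returns the command as an argument list (ready for subprocess) plus the
    file the dump should be written to.
    """
    stamp = datetime.date.today().isoformat()
    outfile = "%s/%s-%s.sql" % (out_dir, db, stamp)
    cmd = ["mysqldump", "-u", user, "-p" + password, db]
    return cmd, outfile

# To actually run one dump:
#   cmd, outfile = dump_command("blog")
#   with open(outfile, "w") as f:
#       subprocess.check_call(cmd, stdout=f)
```

Date-stamping each dump file keeps multiple days of backups side by side instead of overwriting yesterday's copy.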
One of the most popular URL shortening services is Bitly. Instead of writing my own shortener or using a prepackaged solution, I decided to leverage the Bitly API and its engine for the URL shortening required for one of my projects. I needed to write a quick and dirty function that would take the required inputs and give me the Bitly-shortened URL for my web application.
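A function along those lines could look like the following sketch, written here in Python against the Bitly v3 shorten endpoint that was current at the time. The function names are mine, and the access token is a placeholder you would replace with your own:

```python
import json
import urllib.parse
import urllib.request

BITLY_SHORTEN = "https://api-ssl.bitly.com/v3/shorten"

def shorten_request_url(long_url, access_token):
    """Build the Bitly v3 shorten request URL from the long URL and token."""
    params = urllib.parse.urlencode({"access_token": access_token, "longUrl": long_url})
    return BITLY_SHORTEN + "?" + params

def bitly_shorten(long_url, access_token):
    """Call the API and return the short URL, or None if Bitly reports an error."""
    with urllib.request.urlopen(shorten_request_url(long_url, access_token)) as resp:
        data = json.load(resp)
    return data["data"]["url"] if data.get("status_code") == 200 else None
```

Splitting out the request-building step keeps the part that needs a live network call separate from the part you can verify locally.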
The other day at work a colleague asked if there was a way to check a group of ESX servers’ NTP configuration through PowerShell. After searching around and thinking for a bit, I found the Get-VMHostNtpServer cmdlet. Running this in a loop will let us go through an entire group of servers and pull each host’s NTP information.