
How Flask, Heroku & Alembic Play Together

I just spent a couple of days getting up to speed on database migrations in general, how to make them work with Flask and Postgres, and how to get it all running on Heroku. There is some information out there, but it took a little time to hunt down what I needed; thus, I’ve summarized some of the main steps to help get others up and running with Flask, Heroku and Alembic.

Why migrate? Best practice is to avoid recreating your database(s). Usually you just want to make changes to the existing database(s) and track those changes. If at any time you need to go back to a previous version, migrations make it easy to revert to an old version / schema and then upgrade back to the most recent, depending on your needs.

Migration Types: They are usually discussed as either schema (the structure of the database) or data (the stored stuff – aka the creamy filling). Sometimes they take place at the same time and sometimes not.

A Flask Migration Package Option: Alembic

Previous posts covered how much I leveraged the Flask Mega Tutorial for building a web application (app). In regards to migrations, Flask Mega primarily focuses on SQLite, which is not as helpful here because Postgres is needed for Heroku deployment.

Alembic is a migration tool that is better maintained than the sqlalchemy-migrate package, and it is from SQLAlchemy’s author. There is documentation on how to set up and run database migrations. I’ve listed below some of the main, basic steps needed to set up Alembic, create revisions and run it all on Heroku. This assumes you’ve already created a database locally as well as added one on Heroku and promoted it as the default.

Where there is a $ or =>, the words following should be run on the command line, and yes, these directions are based on Mac.

How to Start

  1. Install Alembic $ pip install alembic
  2. Add to requirements $ pip freeze > requirements.txt
  3. Initialize it inside your project root folder $ alembic init alembic
  4. Ignore the .ini file for this basic installation
  5. Change env.py with the directions at this link. If the app is in the Flask Mega structure, then just replace with app, but make sure your Config file is set up for Postgres:

import os
if os.environ.get('DATABASE_URL') is None:
    SQLALCHEMY_DATABASE_URI = 'postgresql://localhost/<db_name>'
else:
    SQLALCHEMY_DATABASE_URI = os.environ['DATABASE_URL']
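That check can also be wrapped in a small helper so the app config and Alembic’s env.py can share it. A sketch (the helper name is mine, and sun_finder_db is just the example database name used later in this post):

```python
import os

def database_url(default='postgresql://localhost/sun_finder_db'):
    """Prefer Heroku's DATABASE_URL when it is set; otherwise fall back
    to the local Postgres database."""
    return os.environ.get('DATABASE_URL') or default
```

On Heroku the promoted database sets DATABASE_URL, so the same code works in both places without edits.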

  6. Create first revision $ alembic revision -m "First revision."
  7. Find and add change scripts for upgrade and downgrade to the new revision file
  8. Migrate $ alembic upgrade head
  9. Repeat 6 – 8 for further local revisions and migrations
  10. Revise Procfile:

migrate: alembic upgrade head
upgrade: alembic upgrade +1
downgrade: alembic downgrade -1
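For the change scripts, each revision file that Alembic generates has empty upgrade() and downgrade() functions to fill in. A hypothetical example (the table and columns are made up for illustration; the real file also carries revision identifiers at the top, and it only runs under alembic upgrade / downgrade, not on its own):

```python
"""First revision."""
from alembic import op
import sqlalchemy as sa

def upgrade():
    # Applied by `alembic upgrade`
    op.create_table(
        'locations',
        sa.Column('id', sa.Integer, primary_key=True),
        sa.Column('name', sa.String(80), nullable=False),
    )

def downgrade():
    # Applied by `alembic downgrade` to undo the upgrade above
    op.drop_table('locations')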

  11. Git add all changes $ git add .
  12. Git commit $ git commit -m "Procfile and running revisions."
  13. Push to Heroku $ git push heroku master
  14. Run the Alembic migration on Heroku $ heroku run alembic upgrade head

If you get something like the following, then it went well:

Running `alembic upgrade head` attached to terminal… up, run.****
INFO [alembic.migration] Context impl PostgresqlImpl.
INFO [alembic.migration] Will assume transactional DDL.
INFO [alembic.migration] Running upgrade None -> *********, Create account table
INFO [alembic.migration] Running upgrade ******** -> ********* Add zoomlevel to locations.
INFO [alembic.migration] Running upgrade ******** -> *********, Test add favorite.
INFO [alembic.migration] Running upgrade ******** -> *******, Test add favorite.

To double check changes went through on Heroku, here are a couple commands:

  1. Launch Postgres interactive environment on Heroku $ heroku pg:psql
  2. Look at tables => \dt
  3. Look at table schema => \d <table name>

That should cover it to get started. Creating a revision file and migrating locally as well as on Heroku are steps that should be repeated for each new migration.

As always, there is more info out there on the nuances and complexities of migration. There is also the autogenerate functionality that can automatically define change scripts for things like a schema change, but it is limited in what it can do. Check that out in the reference documents. No matter what, I recommend always taking a look at the revision file just to make sure it will do what you need. And have fun migrating.
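For reference, autogenerate is just an extra flag on the revision command, assuming env.py’s target_metadata points at your models’ metadata:

  1. Draft the change scripts from model vs. database differences $ alembic revision --autogenerate -m "Autogenerated revision."
  2. Review the generated file, then migrate $ alembic upgrade head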



Deployment is not the Devil (Flask & Heroku Tips)

On a previous post I claimed deployment is the devil because after several successful (not always easy) deployments, pushing up my Sun Finder app proved elusive. I seriously wanted to scratch my eyes out at times with all the errors and issues. Still it was a good learning experience (one that I fought against but a good one all the same), and I did finally deploy as of last week! Check it out at sunfinder.io.

So deployment isn’t all bad, but it sure can be frustrating when a ton more work is needed to deploy after the long haul of web app development.

To help others new to deployment and especially working with Flask, here are some things I learned along the way.

First off, if you are working with Flask, then use this help page as a starting point on how to develop and set up apps on Heroku. On the left side of the page, there are links that provide similar support for other frameworks. Just make sure to set up a repository to store your project and configure remote access.

1. Local Database

The biggest deployment challenge I had was importing a pre-populated database from my personal computer (local).

Heroku provides a Postgres add-on from Heroku Postgres which adds a remote database onto the application. When a remote database is provisioned, a config variable is assigned to the repository which usually includes a color in the name. This variable is a reference to the URL where the empty database is accessible.

DB Remote Storage Option
In order to load a pre-populated database, it has to be stored remotely with a service provider like AWS S3 (Amazon Web Services Simple Storage Service) or Dropbox and then imported to Heroku. S3 is a popular storage solution because it’s been around for a while and is known for its optimization. I haven’t looked into Dropbox as a solution, but I suspect it’s a good option as well.

High-level directions on how to setup S3 for Heroku are provided at this help page. AWS also provides pretty extensive usage directions.

IAM: User & Permission Setup
After signing onto AWS, go to the IAM section under the Deployment & Management section. In this area, set up a user and obtain security credentials. Make sure to capture the credentials (Access Key ID and Secret Access Key) for later reference. Then set up a group and assign access permissions. Go for the admin permissions setup if it’s just one person. Once the group is created, click on it and look below the tab section for the option to add a user. Make sure to add the user to the group.

S3: Bucket Setup
From IAM, return to the main section by just clicking on the box in the top left corner and choose S3 under the Storage & Content Delivery section. In this area, create a bucket (aka root folder) to store the database. Make sure to create the bucket in the region where the app is stored, which for Heroku is the US Standard region. This is important because S3 offers free in-region data transfer. Also, make sure the group is granted access permission to this bucket. At this point the database can be uploaded.

DB: Compression, Upload & Configuration
In order to upload a database, compress the local copy first, which is also referred to as dumping. Heroku’s directions on how to create a compressed version of the file can be found at this page. Create the dump file and open the AWS bucket in order to access the upload option. Once the file is uploaded, add the AWS security credentials into the Heroku configuration following the directions on the Heroku S3 help page.
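For reference, the dump command in those directions boils down to something like the following (swap in your own database name and any connection flags you need; the flags produce the compressed, custom-format archive the import step expects):

$ pg_dump -Fc --no-acl --no-owner -h localhost sun_finder_db > sunfinder.dump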

DB: Import
I followed the import directions on Heroku’s help page that explained compression, but there were a number of errors (like “invalid dump format: /tmp/…/sunfinder.dump: XML document text”). In order to resolve these problems, I logged into the Heroku Postgres site. It showed all of my Postgres provisioned databases, and when I clicked on one of the databases, I was able to see connection settings and statistics. There is an icon of two arrows pointing in opposite directions in the top right corner that provides a list of additional options. There I clicked on PG Restore and found more explicit command directions for import. The only part of the command that needs to be changed is swapping “your data file” with the name of the dump file inside the bucket. This resolved my errors and enabled database setup.

Just remember that any changes made to the local database need to be compressed, uploaded onto AWS again and imported in order for it to be seen in the remote application.

2. Static Content

When I first read the Heroku S3 help page, I mistakenly thought I had to store all of my static content on S3 (e.g. img, js, css). Granted, previous deployments seemed to work without requiring this, but I couldn’t get the css and js files to load correctly on my application. I was getting a 403 error with a link to an XML page that said “Access Denied”.

So I loaded all the static content on AWS S3 and made it public. This actually made the application work once I changed the static file references to the new AWS location and links. Then I finally figured out that the Heroku application key configuration was incorrect and was thus the problem. So I rolled back my changes to keep the static reference links internal vs. pointing at AWS.

Using AWS to store and reference static files is more useful in situations where there is a significant amount of content and/or users are loading content onto the application. There is a lot of literature out there that provides more details.

3. Heroku Configuration & Updates

In the process of setting up the Heroku repository, don’t forget configuration. It can make things go wonky, like 403 errors, if it’s wrong. In my errors around static page loads, the configuration of the Flask app secret key was incorrect on Heroku. Definitely read this link and make sure to load all the secret keys that are needed.

Additionally, make sure to git add and commit changes in order to push updates to Heroku. If a change doesn’t show on the remote site, it’s possible that it wasn’t committed before the Heroku push.

4. Not All Browsers are the Same

Another error/warning I found was in the Chrome browser’s Inspect Element console: “The page at … displayed insecure content for ….”. The dots represent a link that the warning referenced. My current hypothesis is that this is because I loaded HTTPS Everywhere on my computer and some of the links in my site point to sites that do not use secure socket layer protection. It’s just not an option at some sites. This is just a warning and does not prevent my application from functioning. If I learn more, I will update this post.

One thing I was reminded of while troubleshooting my app is that not all browsers function the same way. So I opened my app in a different browser to check if some errors and warnings would go away. It’s just one more way to test the application and help narrow down the problems. Granted, there are many browsers and versions of browsers that can impact functionality, and plenty of materials on how to develop for all those variations.

5. When All Else Fails – Reboot

Initially, I made the first Sun Finder deployment attempt in June. When I returned to deployment in Aug., I tried working with the already established Heroku repository and configuration. At some point in my error tackling, I realized it’s better to just reset and restart. So I deleted the Heroku repository and created a new one. This didn’t resolve all my errors, but it did help clear out some of the more mysterious ones (e.g. the ones unknowingly created during the learning process).

For those out there working on Heroku deployment, I still stick by a previous post comment that Rails is an easy experience, but it is very doable to achieve deployment with other frameworks. If anything it can be a little more educational at times. Just don’t let the challenges keep you from that final hill to launch.

Flask Mega Tutorial, Sun Finder and SheCodes

It has been a busy several weeks, and from all of it I’ve written just as lengthy a blog post as last time. After deploying my Rails app, I switched gears to refocus on Sun Finder (through an indirect route). I signed up to present the app at the SheCodes Conference on August 9th (yesterday), and I wanted to spiff it up a bit.

Flask Mega Tutorial (detour)

Before going back to Sun Finder, I spent a week finally going through the Flask Mega Tutorial which I had wanted to do since March. I highly recommend it because it goes from setting up a virtual environment through full stack development to a variety of deployment options. Going through that after the Rails tutorial was valuable because it solidified common concepts around web application structure (e.g. configuration, app instance setup, db setup and integration, where to delineate between view and controller). And of course it was helpful to contrast the differences to better understand how much Rails does for you behind the scenes.

New Stuff
The tutorial was great to help me work through a full Flask implementation on Heroku. I wanted to go through this so I would be able to better navigate how to finally deploy Sun Finder. Additionally, Flask Mega covered a couple concepts that I hadn’t seen yet and I was pretty excited to learn:

OpenID allows using one username and password to log in to multiple websites. This is also known as a decentralized authentication standard. Flask provides a package for easy integration, and the benefit of course is to reduce the number of logins that users need for all these websites. To note, OpenID is not OAuth, which is another login concept that sometimes gets confused with OpenID. They can be used together or separately. OAuth authorizes one website to access another website’s data about a user (e.g. Facebook and Spotify), while OpenID is just a single login sans data sharing. Some key benefits of logging in with these applications are that you don’t have to deal with password storage security, validation and resets. It’s basically outsourcing your site’s login.

Flask-Babel is a package that determines the primary language (e.g. Spanish) set in the client’s browser and then displays the site in that language. Granted, there is some setup, including translation efforts, to get this to work, but once it is set up, the web application becomes multilingual.

Moment.js is a more user-friendly date-time rendering library that uses the client’s browser to track and display time based on her/his time settings. This is a better way to display time because it will adjust to user preferences, from time zone to whether to use a 24-hour clock to the order of month, day & year. The server side can store events based on UTC timestamps, but when displaying a date-time in the client’s browser, the UTC timestamp will be converted by Moment.js.
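The server-side half of that arrangement is small: store and send UTC, and let Moment.js localize in the browser. A sketch (the function name is mine):

```python
from datetime import datetime, timezone

def event_timestamp():
    """Current time as an ISO 8601 UTC string; Moment.js can parse this
    on the client and render it per the user's own settings."""
    return datetime.now(timezone.utc).isoformat()
```

The template then hands the string to something like moment(ts).format('LLLL') on the client, which does the time zone and format conversion.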

Coverage.py measures code coverage by providing an easy-to-use report that notes which parts of the code have been tested. This is such a great package because figuring out how much to test is tough, and this gives a good understanding of where the gaps are to help pinpoint what tests to add. It’s also really easy to add in this package.
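Using it is just a couple of commands (the test file name here is hypothetical):

  1. Run the tests under coverage $ coverage run app_tests.py
  2. See which lines were exercised, with missing line numbers $ coverage report -m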

Last on my list, the tutorial went over how to set up your own server. I actually skipped this section for now because I needed to get back to Sun Finder and Miguel warned it would be a long chapter for the uninitiated.  I’m definitely going to complete that chapter because as always I want to learn about everything and I think setting up a personal server sounds fun.

On the whole the tutorial was great to reinforce concepts I’ve learned so far as well as expand my exposure to new ways to work. I’m a big believer in practice makes perfect in this space and going through this process as much as possible will hone skills over time.

Return to Sun Finder

So I finally opened up Sun Finder again and I really hated it when I got back to it. I could see how much I didn’t know when I had written it. I felt like there were so many obvious flaws that it’s a wonder I got any interviews at all after career day.

After hating on it for a bit and having a hard time reviewing where I left off, I talked with a few people who have been in the industry for a while about the fact that this is typical. We write code, we learn and we see all the flaws in what we wrote before (we also wonder what we were thinking when we wrote it) because we are always learning and growing. So my focus needs to be on how far I’ve come. I mean really, I did start coding in late Feb. Building a full stack app in 4 weeks between April and May was and is impressive no matter how noobie it is. And the fact that I saw so many ways to improve it reinforced how much I have learned.

Standard App Structure
I did find that I wasn’t afraid to completely rip up what I had started with and rework the whole thing. That used to be a problem for me back in May because changes completely threw me, since I didn’t have as solid a grasp on site structure and functionality. This time I literally overhauled my project to align with the Flask Mega structure so it would function like other apps and be set up for deployment.

Reworking the structure was a challenge because Flask Mega recommended using an extension package that preconfigures SQLAlchemy vs. direct interaction (my setup back in May). The main difference is that the extension takes care of some of the configuration, especially running and maintaining the database session (accessing and storing data). A little more specifically, the package generates a SQLAlchemy object when the application is passed into it. Using the package vs. working directly with SQLAlchemy gives access to all the same functions, plus a preconfigured scoped session, the engine and a declarative base that is a configured Model baseclass with a query attribute. To note, the session still needs to be committed when working with the database, but it doesn’t need to be removed at the end of a request.
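To make the contrast concrete, here is roughly what the direct-SQLAlchemy setup involves; each commented line is a piece the extension otherwise wires up for you. The model and names are illustrative, with an in-memory SQLite engine standing in for Postgres:

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, scoped_session, sessionmaker

engine = create_engine('sqlite://')                  # the engine, built by hand
session = scoped_session(sessionmaker(bind=engine))  # the scoped session
Base = declarative_base()                            # the Model baseclass
Base.query = session.query_property()                # the query attribute

class Location(Base):
    __tablename__ = 'locations'
    id = Column(Integer, primary_key=True)
    name = Column(String(80))

Base.metadata.create_all(engine)
session.add(Location(name='Mission'))
session.commit()  # committing is still on you with either setup
```

With Flask-SQLAlchemy the first four lines collapse into one db object created from the app, and models subclass db.Model instead.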

Additional tricky bits included the fact that Flask Mega applies SQLite throughout the tutorial, and I already had Postgres set up in my project. SQLite is directly integrated into the web application for local storage, while Postgres runs separately and requires an adapter (e.g. psycopg) to integrate with the web app. The main change I needed in my code from Flask Mega was that instead of writing the database reference to a file in my web app folder like below:

  • SQLALCHEMY_DATABASE_URI = 'sqlite:///' + os.path.join(basedir, 'app.db')

I needed the code to point to the location of my separate Postgres database as noted:

  • SQLALCHEMY_DATABASE_URI = 'postgresql://localhost/sun_finder_db'

A couple of clarifying points: os.path.join is just pulling the directory path for where the application is stored using the basedir variable, and app.db is the SQLite db file, which is the equivalent of the Postgres database named sun_finder_db. I left the SQLite code the same as what you would see in Flask Mega in case you reference that setup.

Javascript & JQuery:
After restructuring the files and getting my app to work, I started focusing on how to improve views dynamically. I easily spent a couple of weeks beating my head against the JavaScript wall. I read and worked with various documentation, including the jQuery site and Code School, as well as other resources. Also, I did a lot of trial and error with my code. There were definitely times where I made progress in my understanding and many others where I felt like I was back in the mire of figuring it all out.

During Hackbright, we spent half a week reviewing JavaScript. There is a lot to cover in 10 weeks when learning full stack development, and becoming an expert in all of it at once is not doable. Still, JavaScript is a complex beast that takes time to get to know. It is not like other scripting languages, and it requires practice to understand. That practice is worth it because it is very valuable and just continues to grow in importance on the web. Similarly, jQuery is just as important, and you can consider it JavaScript’s close relative. jQuery primarily provides shortcuts to JavaScript code and ensures consistent behavior across browsers. I highly recommend taking the time to learn and understand both.

My initial use of JavaScript and jQuery in the app was really only possible because I had a lot of help from my instructors and mentors. I didn’t fully understand how the code worked and what it was doing. Spending the time I did over the last couple of weeks forced me to really appreciate what’s going on and how to apply the language. I also actually enjoy using it now, despite the fact that it still frustrates me any time it can.

One key change is that I applied the Bootstrap typeahead plugin (akin to autocomplete) from a code challenge I did back in June. Typeahead in essence uses an Ajax (asynchronous JavaScript and XML) call to pull data from the server without changing the display or behavior of the existing page.

The real improvement came from the fact that I no longer passed the full contents of the database into the view and then looped through it to create and display the list of predictive text in my search bar. In May, I had been really proud when I first built that functionality out because it was what I understood at the time and it worked. This time, I understood how to pass the request from the view through the Ajax call and build a targeted list on the server side that would feed back directly to the Ajax request and then post to the view. This is a much more optimized solution, especially as I grow the list of database location names, because I only pass a small amount of data vs. everything.
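The server-side piece of that is essentially a prefix filter over the location names, serialized for the Ajax call. A stand-alone sketch (the route wiring is only hinted at in the comment, and the names are made up):

```python
import json

def matching_locations(typed, names, limit=10):
    """Return up to `limit` names starting with what the user typed,
    case-insensitively -- the typeahead's Ajax payload."""
    prefix = typed.lower()
    return [n for n in names if n.lower().startswith(prefix)][:limit]

# In a Flask view this would sit behind a route, e.g.:
#   return json.dumps(matching_locations(request.args['query'], all_names))
neighborhoods = ['Mission', 'Marina', 'Sunset', 'SoMa']
payload = json.dumps(matching_locations('m', neighborhoods))
```

Only the handful of matching names crosses the wire, instead of the whole table being rendered into the page.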

I actually started getting obsessed with Ajax to the point where I wanted to load everything on one page. This is a bit complex, and unfortunately I hit a couple of walls that were too difficult to get past in time for the SheCodes conference. So in the interest of time, I ended up rolling back my one-page concept to loading separate views for each request. Basically, it’s a full page load for most of the links that are clicked. It’s not as elegant or efficient, but it works. I also get that I have to try lots of things and sometimes go back to square one before making progress. It’s part of the learning process.

Sunrise & Sunset Data:
I pulled out the Forecast.io API because I have known for a while that I wanted to focus on Weather Underground results. In so doing, I managed to lose my data points on sunrise and sunset, which I used to help determine whether to show a sun or moon image for the results. Now it seems the easiest thing would have been to leave the Forecast.io API in place, but I wanted to pull it out. I thought sunrise and sunset would be easy data to find. However, it was not as easy as I expected.

I tried the PyEphem package to calculate the times based on given coordinates. The results unfortunately were not accurate; thus, I switched gears to apply the Earthtools API. In so doing, I had to learn how to parse XML data, which is good to learn, but it was just one of those moments of, “there has got to be an easier way to do this.” And FYI, JSON is definitely easier to manage.

I applied the BeautifulSoup package to help parse the XML. There are many parsers out there; I just picked BeautifulSoup because I was familiar with the name. Still, this proved tricky to do, and it took some time to realize that the XML response was an object and its content needed to be passed as a string in order for BeautifulSoup to process it. Note, I use the requests HTTP library vs. urllib2 to pull API data. So when I run requests.get on the earthtools URL, I get back a response object. In order to get to the XML content, I actually have to access the content attribute on the response object instance. So if I assign what earthtools sends back to a variable named earth_response, then I have to pass that variable into the BeautifulSoup object as BeautifulSoup(earth_response.content) to get it to parse the response.
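The same object-vs-string point holds with the standard library’s ElementTree, shown here on a hand-written response roughly in earthtools’ shape (the real response nests these values inside more elements):

```python
import xml.etree.ElementTree as ET

# Stand-in for earth_response.content -- the raw bytes from requests,
# not the response object itself.
earth_content = b"""
<sun>
  <sunrise>06:32:11</sunrise>
  <sunset>19:48:02</sunset>
</sun>
"""

root = ET.fromstring(earth_content)      # parses bytes/strings, like BeautifulSoup
sunrise = root.findtext('sunrise')
sunset = root.findtext('sunset')
```

Either parser works; the key is feeding it the response content rather than the response object.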

I added the Bootstrap modal plugin (after unsuccessfully loading the pop-up plugin) to show a Google map when the map icon is clicked. I have further plans for this feature, mentioned below. What I was able to accomplish for now is that I’ve added the HTML5 Geolocation API to pull the coordinates from the client’s browser. These coordinates are used to build the initial map, which can be seen when clicking the map icon. It was easy to set up and cool to see in action.

Some additional changes I made were to stop passing all content to all pages, now that I have a better handle on the view setup and what data I needed where. I also started to update the user login information based on what I learned from Flask Mega, but I tabled further adjustments till after the conference. I fixed view content, improved page and variable names for clarity and added Coverage.py in anticipation of applying tests.

There were a number of changes that I made, and what was funny is that despite all that work, the front-end view actually hasn’t changed that much.


Presenting the app at SheCodes was good practice in technical demonstrations, and in general I really enjoyed the conference for the type of speakers and content covered. I created a couple of slides that diagram the high-level MVC (model view controller) setup for my application and the different technologies and resources that I’ve used so far. Those slides can be found on Speakerdeck. And if you want to see visual samples of the site, they can be found at nyghtowl.github.io.

I am pseudo proud of it again. I say pseudo because I will always have things I want to improve and it will never be perfect but apparently that is fairly standard with coding. My appreciation for the web app is in the fact that it has shown me how far I’ve come and gives me a space to continue to experiment and grow.

What I Haven’t Finished Yet

So I am going to avoid the elephant in the room for a minute and explore it below. In terms of things I want to do, I want to pull out the current object that organizes the weather data and instead use Ajax and JavaScript to obtain the weather information and pass it directly to the results page. I also want to finish the user login and preference pages to help customize the user experience. Additionally, I plan to make the map more interactive, from having links that can direct users to results to caching and showing weather data points for immediate view. Last as usual but not least is to add tests that will help keep track of my application and give more information on what is breaking and when.

Deployment is the Devil

So deployment is hard, and I kinda hate it. I actually am still working on it as we speak because my deployment involves S3, and there is something special you have to do with static assets that Flask Mega didn’t cover. Let’s just say that Rails is so much easier for this, and I’ve heard the same about Django. Plus, my previous deployments didn’t require the kind of configuration I’m dealing with. My app is such a simple solution that it makes me laugh at the complexity of what I have to do to get it to work. Still, I will deploy. It’s going to happen come hell or high water, and I will post up what I learned from that experience once I get there.

Sun Finder – Where is it now?

For those following along, I have made very little progress on my Sun Finder application since Career Day about a month ago. Since hindsight is 20/20, I probably could have seen this coming, but I was living in a blissful imaginary world that said: “There will be all this time after I graduate to do all these things.” The reality is that all that time was about to be commandeered by interviews, practice & studying for interviews and a bunch of life stuff that you can never plan for.

I do plan to get back to Sun Finder, and I’ve already started the process to get it online for friends who tell me they keep finding themselves in situations where they want to use it. I hit a bit of a snag while going through the posting process, and there are some errors that I need to work through. As soon as that’s done, I’ll share the location.

Now don’t get me wrong, there is still so much to build out and fix on it, and if you look at my code on GitHub you can see a running list of my top items. Still, I want to get something live so I’ve gone through the process and know how to do it. I’m a big believer in putting a stake in the ground and getting something out for consumption. It will never be perfect, and others can call out issues that you can’t see from being so close to it. Plus, being a newbie to this field, I can definitely use guidance from others who have more expertise on more optimal ways to build the app’s functionality.

For the curious, here are screenshots to show you what it looks like: nyghtowl.github.io.

Also, I posted a README file on the GitHub project site to give an overview.

As always more to come.

Week 9 – Crunch Time

It is hard to stop long enough to write this. Sometimes this past week I was too excited to work on my project to sleep. Still I know this is valuable for my sanity to think through what I’ve done so far and appreciate how far I’ve come. Plus, my brain functions better when I take breaks from coding.

Looking at my post from even a week ago surprises me because it feels like longer than 2 weeks since I was figuring out how to use web APIs for example.  At the time it made me feel very lost and the concept seemed extremely foreign. Now whenever I see there’s an API that can be used, it feels like it’s the easiest thing to implement and leverage.

So this week definitely went fast and continued to remind me about “best laid plans”. I had a couple of key functionality goals, like trying to get clickable text on my map, which I worked at off and on all week, and I’m still just cracking the edges of that nut. Still, there are cracks, so I know it’s a matter of time. Anyway, where I am going with this paragraph is that I had some goals and I chipped at them. When they got too hard or there was a barrier, I shifted focus and tried something else for a while before going back. Some things were much easier wins that I went for to feel some level of success, and others I let myself accept were items I could deprioritize for later based on complexity.

Last Sunday, there was a lot of other life stuff going on so there wasn’t a lot of coding. It was a good break. I did take time to define a larger local database of neighborhood data, and I found lots of variations on how San Francisco neighborhoods are defined. There are a lot of opinions in this town.

On Monday, I decided to take a crack at setting up login and create-account functionality using Flask-Login and Flask-WTF (for WTForms integration – not for what some of you are thinking, though at times that feels applicable). I had heard a number of classmates talking about using these extensions, and I wanted to practice applying the packages while I had my cohort’s expertise to leverage. At some point, I plan to add functionality where there will be customizable views for logged-in users (e.g. choose the key locations you want to know the weather for, like where you live and work). We had a lot of speakers that day, so there wasn’t a lot of coding time.

Tuesday, I built out the login and create-account pages so they were loading. The code I wrote was pulled from tutorials and code that my classmates had written to implement login in their apps (thanks Marissa & Jennyfer). At Hackbright, we talk a lot about reuse and not reinventing the wheel in programming if you don’t have to (unless you are trying to understand the fundamentals or you want to create your own thing).

Also, I re-ran the database model setup to seed it with the revised neighborhood dataset from Sunday and to build out a table to hold user account data. I also shifted my database management system from SQLite to Postgres. I wanted to practice Postgres (again, while I can leverage the collective knowledge of classmates – thanks Lindsay & Dee), and it’s what you need if you deploy to Heroku.

Wednesday, I spent a good chunk of time really trying to understand the login code to see how I could adjust and test it. For example, there are WTF default validators you can use or customize for form submission validation. So you can throw standard errors if a user enters their password incorrectly, or customize the errors to say something like “random error just to annoy you”.

One thing I figured out when reviewing the login and create-account code was how to take the repetitive display code and simplify it into a for-loop. It was satisfying to make it more compact. Lindsay thankfully helped me fix one issue I was having with the consolidation, because there was a second, embedded for-loop to generate the WTF errors. I was having a hard time figuring out the right object reference name to use, but Lindsay was able to identify it with a little testing.
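The shape of that consolidation looks something like this (a sketch using plain Jinja with stand-in field objects; the real template loops over actual WTForms fields):

```python
from jinja2 import Template

# The outer loop replaces one copy-pasted block per form field; the
# inner loop renders each field's list of WTForms validation errors.
tmpl = Template(
    "{% for field in fields %}"
    "<p>{{ field.label }}"
    "{% for error in field.errors %} <span>{{ error }}</span>{% endfor %}"
    "</p>"
    "{% endfor %}"
)

# Stand-ins for WTForms field objects, just for this sketch.
class FakeField:
    def __init__(self, label, errors):
        self.label, self.errors = label, errors

html = tmpl.render(
    fields=[FakeField("Email", []), FakeField("Password", ["too short"])]
)
print(html)  # <p>Email</p><p>Password <span>too short</span></p>
```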

Thursday was a much-needed code clean-up day. I adjusted file and variable names to take out duplication and make the structure easier to understand. I shifted code into files that made sense, like pulling straight functions out of my views file and into a functions file. I went back and added more comments when I found myself reading code and not remembering what it was for. I also continued to tweak the app design and finally finished small gaps in results, like having the neighborhood name populate in the title of the results page.

With the help of my instructors, I also started to restructure the code in preparation for adding an Ajax spinner while the page loads. It involves creating a shell page that shows the spinner until the page with the content has finished loading. This is valuable considering I have several web API calls that can take time to pull the data into the results page. I’ve got a couple more items to look up to finish this functionality.

One of my more satisfying moments was at the end of the day, when I tackled including autocomplete (when typing in the search bar, it gives suggestions). I wanted to fill in autocomplete from my local neighborhood database vs. some of the plug-ins that are out there. After a couple of hours of trial and error, I was able to pull my database results into an object variable with SQLAlchemy and Python, and then pass that variable through Flask views to the rendered HTML page, where Jinja was able to reference the object variable inside the script tags.

I ran a Jinja for-loop over the object to generate a list of the neighborhood names and assigned it to a JavaScript variable. That was then referenced in my JS file and utilized by the jQuery code to generate autocomplete in the search bar (also referencing the jQuery code in the HTML to activate it). Basically, there were a couple of different languages I was writing in, lots of data passing between them, and referencing of different files. And when it all came down to it, it worked. It was so cool when it worked. It made me feel like I was starting to get the hang of this programming stuff.
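The Python side of that handoff can be sketched roughly like this (the row shapes and names are illustrative; the real version queries SQLAlchemy and renders through a Flask view):

```python
import json

# Stand-in for rows from a SQLAlchemy query such as
# session.query(Neighborhood.name).all() -- a list of one-tuples.
rows = [("Mission",), ("Noe Valley",), ("Sunset",)]

# Flatten to a plain list of names for the template.
names = [name for (name,) in rows]

# json.dumps yields a literal that can be dropped inside the page's
# <script> tags (e.g. var neighborhoods = {{ names_json }};) and then
# handed to the jQuery autocomplete source option.
names_json = json.dumps(names)
print(names_json)  # ["Mission", "Noe Valley", "Sunset"]
```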

As mentioned above, throughout the week I took some time to work on setting up maps with text. I researched examples that were already implemented and tried incorporating the code. On Friday, I went in with the intention of getting text on the page. As usual, I got sidetracked. While trying again to apply the JS MarkerWithLabel library, I found that an error I had been having since I set up maps was getting in the way.

Maps loaded when the app initialized, but the map was always looking for coordinates, which weren’t generated until at least one search ran. So on the first page, when no searches had taken place, it would throw an error. I could have just passed it some static coordinates, but I wanted to stop it from trying to load at all. I had marked that as a low-priority fix since it wasn’t preventing any page loads, and the map did load when a search took place. However, I found that I needed to fix that error in order to make any additional JavaScript/jQuery progress.
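One way to frame that fix on the Flask side (a sketch with made-up names): have the view pass coordinates, and a flag that turns on the map-init script, only after a search has actually produced them, so the template can guard the script with something like `{% if show_map %} … {% endif %}`.

```python
# Hypothetical view helper: build the template context so the map-init
# JavaScript only runs when real coordinates exist.
def map_context(search_result=None):
    if search_result is None:
        return {"show_map": False}
    return {
        "show_map": True,
        "lat": search_result["lat"],
        "lng": search_result["lng"],
    }

print(map_context())  # {'show_map': False}
print(map_context({"lat": 37.77, "lng": -122.42}))
```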

Thankfully Alex, one of the cofounders of CodePen, was on site yesterday just helping all of us with our projects. He helped further my understanding of JavaScript/jQuery and resolve the loading error. After that, on my own, I was able to get MarkerWithLabel to work, so there is a label now showing up on the map. Next steps are to revise it, make more than one label for the different neighborhoods, and then make them clickable. The rest of the day I spent on some easy wins (e.g. adjusting look and feel and applying Bootstrap’s JS modal plugin for the login) to offset all the time spent just getting a label to show.

There is so much I still want to do, and I can’t believe there’s officially only one more week left in the program. This next week there won’t be much time for coding because of Career Day, prepping for interviews and general hanging out with the class, soaking in the time we have left together. I still can’t believe it’s May.

If you read this far, I’m impressed. My last couple of posts have really been for me to keep track of progress and status. They have not been as layman-friendly or geared towards mass consumption. Next week, I plan to post more about what Career Day was like and less about project status.

If you are reading this and thinking about getting into programming and have any questions about getting started or wanting more clarity about comments above, feel free to email me. If I don’t know the answer, I know how to find it and/or I know a lot of people who do.


Week 8 – Time Goes Fast When You’re Having Fun (Project Status)

To anyone out there who’s watching, I wrapped up week 8 and I’m thinking about how I want to leverage my time in week 9 in preparation for Career Day. I do talk a lot about that as D-day, and it is to some extent. It’s a goal that helps keep focus in terms of how I want to prioritize my time. A second goal is that technically I only have two weeks left in the program, and there are certain subjects I want to practice while I have class time. Sometimes those goals match in purpose, and sometimes I have to just pick based on how I’m feeling that day.

So this past week went fast and I found myself eager to get to coding, but also surprised at all the other “stuff” that kept me from touching code at times.

Last Sunday, I spent the day reading up on the Google Places API with the intent to resolve my search functionality requirements. It definitely expanded search capabilities in terms of what can be entered, and I was able to center the search results with coordinates and a radius around the Bay Area. Thus, typing in “golden gate” was more likely to come up as the park vs. showing up in another location like LA or somewhere in China. I also concatenated the word “neighborhood” onto any search to help further influence the results. There is a lot more I can do here, but this is good enough for now.
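A rough sketch of that kind of location bias (the center point, radius, and helper name are illustrative, and no request is actually sent here; the real call would pass these params to the Places text search endpoint):

```python
# Center point used to bias a Places text search toward the Bay Area.
BAY_AREA_CENTER = "37.7749,-122.4194"  # San Francisco

def build_places_params(query, api_key):
    """Hypothetical helper: assemble the query parameters."""
    return {
        "query": query + " neighborhood",  # nudge results toward neighborhoods
        "location": BAY_AREA_CENTER,
        "radius": 50000,  # meters
        "key": api_key,
    }

print(build_places_params("golden gate", "YOUR_KEY")["query"])
# golden gate neighborhood
```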

Monday I decided to leverage the results from both Weather Underground (WUI) and Forecast.io because they had different data points I wanted to report. I put together a dictionary that mapped the data points to the different sources to help keep me clear on what I would use. To note, the temperatures between the two at times varied wildly for the same coordinates. Supposedly WUI is more precise, based on all the local inputs it leverages, but it would be interesting to see how accurate each is based on location and time. Again, something to think about down the road.
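That mapping dictionary was essentially a small lookup table along these lines (the field names here are stand-ins, not the APIs’ actual response keys):

```python
# Which source supplies each reported data point; purely illustrative.
DATA_SOURCES = {
    "temperature": "forecast.io",
    "cloud_cover": "forecast.io",
    "humidity": "wunderground",
    "wind_speed": "wunderground",
}

# Group the fields by source to see what each API needs to provide.
by_source = {}
for field, source in DATA_SOURCES.items():
    by_source.setdefault(source, []).append(field)
print(by_source)
```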

I also started to work on developing the More Details page on Monday, and that hit a bit of a snag on Tuesday when I was trying to figure out how to pass the forecast dictionary to the new page. The recommended approach was to pass it in as a Flask session variable. The odd part was that the session code I put together looked spot on, and after checking it several times myself and with others (esp. instructors – huge thanks to Cynthia on that one), we were flummoxed.

I decided to set that aside for the moment and start to look more at the front-end of the app. Sometimes it’s important to step away and get a refreshed perspective to help solve a problem. So I spent Tuesday and a good deal of Wednesday understanding Bootstrap and CSS. I’ve always wanted to keep my design simple without a lot of distraction. The goal of the app is a simple question that needs a fast and simple answer. Still, I found a lot of adjustments to make to the layout and format that made the pages look and feel cleaner. It was a fun activity and the right distraction from the session kerfuffle.

Late in the day on Tuesday, my mentor, Michelle, helped me identify and start work on an alternative for the More Details page that bypassed the need for session. Interestingly enough, the alternative was what I had wanted to do with the page in the first place: on the first results page there is a link to click that expands the page with more details. We used Bootstrap’s accordion functionality to make this happen. The reason it bypassed session was that the dictionary had already been passed directly to the main results page when it was created, and the more-details expansion now took place on that same page, so no new page was being generated.

Wednesday I figured out how to apply a calendar on the search page with jQuery, and continued building the more-details results expansion. I also figured out how to add a Google map onto the more-details section, which helped make the page really feel like it was coming together. By the end of the day, session started working even though I didn’t need it at that point. No one knows why (the thought is that it was probably a cache problem), but it works.

Thursday I spent time optimizing my code and changing the forecast dictionary into an object, which helped me practice my understanding of classes and OOP. I also spent a good amount of time testing code and fixing functionality. At the end of the day, I needed something a little more fun; thus, I expanded the results to include night. I had been ignoring it purposefully before since it is a sun finder app; previously I was just popping up a message at night telling people to go to sleep. So I added in images of different phases of the moon, plus the moonphase package to help generate the phase and the subsequent picture to pop up on the results page.

Friday was more code clean-up and testing. I started to hook the calendar up to the backend functionality, and I shifted the date results from timestamp format to datetime format. For anyone who works with dates and times, just be aware that datetime is tricky and can take a little time to understand. I also started working with timepicker to include choosing an hour in the search process, but decided to shelve it for now; there are other features and functions that are more of a priority. Also, I still need to finish building out how the results will populate if a date is picked that is not the current day. As I mentioned last week, every time I do something it brings up a number of other questions and things to do. It continues to be great practice in finding and keeping focus.
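The timestamp-to-datetime shift boils down to something like this (the sample timestamp is arbitrary; weather APIs generally return Unix timestamps, i.e. seconds since the epoch):

```python
from datetime import datetime, timezone

ts = 1368889200  # arbitrary Unix timestamp

# Convert to an aware UTC datetime, then format it for display.
dt = datetime.fromtimestamp(ts, tz=timezone.utc)
print(dt.strftime("%Y-%m-%d %H:%M"))  # 2013-05-18 15:00
```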

Saturday (yes, I worked on this thing every day this past week), I spent the whole afternoon with my awesome mentor, Jeremy, who helped me debug the moon functionality that I added. He also gave me great pointers on keyboard shortcuts and how to utilize pdb when looking at the results from the weather APIs. This let me work with results directly in the terminal and will make it easier to search for the data points I want to target on my results page based on the date chosen. We started working on building the functionality for setting up the map to have names of neighborhoods and make it clickable. While working on that, we stumbled on the weather overlay for Google Maps, which was cool, but on further inspection I realized it just wasn’t detailed enough for my purposes.

At the end of the day, he shared a great video, JavaScript: The Good Parts by Doug Crockford. We went through some of the video together, and he took time to coach me through the concepts, which helped speed up my understanding. It was definitely very helpful as I work on getting more comfortable with JavaScript.

Something to stress, which I mentioned a few times above, is that throughout the week I was constantly testing my pages to make sure they were working and that the results were what I expected. Any time I made changes, I would go back and test to see the impact, so a good percentage of my time was spent just testing. It’s a good practice to have, because as you develop, something doesn’t always work the way you expect, and it can be a small error. It’s much harder to debug if you’ve made several changes and then get an error that is not easy to track down despite all the error reports.

This past week was definitely full, and there is plenty still that I want to do. I understand it won’t all get done, and I’m OK with that. I’m pretty happy with where the app is already. Michelle had me show it to some of the alumni this past Thursday night, and I got some great feedback on it. So this next week I plan to really try out functionality I want to learn and get practice with. It’s going fast, and I know this next week will not be any different.

Week 7 – SF Sun Finder Project Progress

Development progress has definitely been enlightening the last week and a half. I’ve summarized the highlights to show what I’ve gone through so far and an example path of developing an HB project.

At the end of week 6, I spent Thurs. playing with Balsamiq and mocking up all kinds of ideas. Friday, I spent the day catching up on emails and just getting things in order so I could think straight. Then all I did over the weekend was start setting up my virtual environment.

This past Monday, I started to really build out the first couple Flask views. I also took some time to finally learn about CSS and how to apply it to the HTML pages. Having a couple working pages kept me motivated throughout the week.

On Tues, I built a temporary local database in SQLite to map a couple of SF neighborhood names to central coordinates. I used this for initial test purposes and applied the SQLAlchemy package to interact with my db. It was nice, brief practice in designing databases and utilizing a seed file to populate them.
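The shape of that lookup table, sketched here with the stdlib sqlite3 module for brevity (the project itself went through SQLAlchemy, and the names and coordinates below are illustrative):

```python
import sqlite3

# Minimal stand-in for the neighborhood table.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE neighborhoods (name TEXT PRIMARY KEY, lat REAL, lng REAL)"
)

# Seed rows, as a seed file would insert them.
seed = [("Mission", 37.7599, -122.4148), ("Sunset", 37.7521, -122.4960)]
conn.executemany("INSERT INTO neighborhoods VALUES (?, ?, ?)", seed)

# Look up central coordinates for a user query.
row = conn.execute(
    "SELECT lat, lng FROM neighborhoods WHERE name = ?", ("Mission",)
).fetchone()
print(row)  # (37.7599, -122.4148)
```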

Wed I began learning how to use and integrate the Forecast.io API into my application through requests (vs. urllib). I was able to complete the loop of capturing a user query, pulling coordinates from my local database, using the coordinates to request forecast information, and then posting the results on the HTML results page.
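Stripped of the Flask plumbing, the core of that loop is building the request URL from the looked-up coordinates (the URL shape follows Forecast.io’s documented pattern of /forecast/&lt;api key&gt;/&lt;lat&gt;,&lt;lng&gt;; the key and coordinates are placeholders, and no request is sent here):

```python
FORECAST_URL = "https://api.forecast.io/forecast/{key}/{lat},{lng}"

def forecast_url(api_key, lat, lng):
    """Build the URL that requests.get() would then be called with."""
    return FORECAST_URL.format(key=api_key, lat=lat, lng=lng)

# Coordinates as they might come back from the local neighborhood db.
lat, lng = 37.7599, -122.4148
print(forecast_url("YOUR_KEY", lat, lng))
# https://api.forecast.io/forecast/YOUR_KEY/37.7599,-122.4148
```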

So guess that means I’m done…not exactly.

Also on Wed, I finally finished figuring out how to keep my API keys secret in my environment and not post them to GitHub (hint: .gitignore is your friend, as my mentor thankfully showed me). I had worked on that as side research since Monday, with lots of help from my instructors. Last, I searched for really cute stock photos to use for each weather instance so I would enjoy looking at my results.
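The pattern is to read the key from the environment at startup instead of committing it, while the file that exports it stays listed in .gitignore (the variable name here is a made-up example):

```python
import os

# Read the key from the environment rather than hard-coding it in
# source; FORECAST_API_KEY is a hypothetical variable name.
FORECAST_KEY = os.environ.get("FORECAST_API_KEY", "")
if not FORECAST_KEY:
    print("warning: FORECAST_API_KEY is not set")
```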

On Thurs, I started to build out the More Details page, tweak existing pages and build out some validation points on the data (e.g. is it day, and if percent cloud cover is less than 20%, then a sun should show vs. partly cloudy). That was the day I finally recognized one of my biggest challenges: based on my initial design, I was apparently trying to recreate search. Definitely a fantastic challenge, but so much bigger than the scope of what I can handle right now. Fortunately, I also happened to talk to someone that day who had worked for Google and thankfully clued me in on a potential solution: Google Places.
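Those validation points amount to a small decision function (the threshold comes from the example above; the icon names are made up):

```python
def pick_icon(is_day, cloud_cover):
    """Choose a results-page image. cloud_cover is a 0-1 fraction."""
    if not is_day:
        return "night"
    if cloud_cover < 0.20:
        return "sun"
    return "partly_cloudy"

print(pick_icon(True, 0.10))   # sun
print(pick_icon(True, 0.55))   # partly_cloudy
print(pick_icon(False, 0.10))  # night
```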

Friday I started to investigate the Google Places API. I also confirmed that Weather Underground (WUI) does have more details and variations between SF neighborhoods, so I started reading up on how to use that API in addition to Forecast.io. Friday was really about research and tweaks, since we did field trips most of the day.

This last week, we talked often about minimum viable product (MVP), meaning prioritize the development that will give the most basic functionality for demo and testing. Technically I could argue my app is done, since it does the most basic of functions and I’ve already been sharing it with my classmates. The reality is that there is so much still to do. If anything, a common challenge is that I am constantly thinking of new features and functionality that I want to add, and it’s great practice in prioritization. So swapping in WUI could wait compared to other functionality, since Forecast.io has perfectly valid data.

Something to also note: in addition to all the coding, we had several visiting speakers throughout the week, went on some cool field trips as mentioned, and did interview practice every other day. By Friday, my brain was tired. It was a funny feeling, because it was not the usual tired feeling and my head was not cottony like in the first several weeks. I just knew I was at the no-brain state. It happens, and it is a good sign to take a break.