Category Archives: Zipfian

Zipfian Academy experiences

Half-way Mark – Numb Brain

At the pace we were going, the class showed signs of weariness in week 5 and officially hit the wall this past week. The instructors eased up on us in exercises this week, and gave us a pseudo free day to recharge yesterday.

The week was about assessing where we were, covering MapReduce, Big Data, and Flask, and starting to think about projects. It may not sound like they eased up on us, but they did.


They gave us a practice exercise the Friday of week 5 to do individually. The data was a company’s click rates on advertisements based on user and location, and our goal was to recommend locations to target for future advertisements. The data was in a pretty messy state across multiple tables, so it’s not surprising it took us several hours just to clean and load it, which was pretty frustrating but very real world. The rest of the time we analyzed the data and applied models to come up with recommendations.

On Monday, we were given an hour-long exam made up of small coding problems covering several topics we’ve gone over so far. After both assessments, we met with the instructors to go over how things were going and to determine where we should focus our studies for the remainder of the course. The assessments were tough to get through, but they did help give us an understanding of how we were progressing.

MapReduce, Hadoop & EMR

MapReduce definitely seems simple at first blush and yet can be devilishly difficult. This technique is really for handling large amounts of information, which makes it a valuable tool for Big Data. You apply some type of transformation to data across a large dataset and then reduce (consolidate) the data down for the results. I’ve been trying to think of a simple example to explain this concept, and finding one that’s simple and quick is challenging. But what the heck, here goes…

Consider a dataset that has 1M rows and only two columns, each holding an id number. The same id can occur multiple times in either column, and each row represents a connection, like followers on Twitter. You would use a map function to emit each row of ids as a key-value pair and pass them one at a time to the reduce function. The reduce function condenses the multiple occurrences of the same id on the left side into a single key, and then collects all the values that appeared on the right side of that key id into a list of values associated with it.

Example Map List:

  • A B
  • A C
  • A Z
  • A W

Reduce Result:

  • A: [B, C, Z, W]

This is a really simplified example and not only can you make the functions more complex in processing results, you can run the data through multiple MapReduce functions in a stream to further adjust the data. MapReduce would be used in a case like generating Twitter’s recommended people you should follow. It’s a lot of data to go through for a result that is calculated regularly and needs to be produced quickly.
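The follower example above can be sketched in plain Python. This is just a local simulation of the idea, not real Hadoop code; a tool like mrjob or a cluster would run the same mapper/reducer logic split across many machines:

```python
from collections import defaultdict

# Each row is an (id, id) pair, like follower edges on Twitter.
rows = [("A", "B"), ("A", "C"), ("A", "Z"), ("A", "W"), ("B", "C")]

def mapper(row):
    # Emit each row as a (key, value) pair.
    left, right = row
    return (left, right)

def reducer(pairs):
    # Collect every value seen for the same key into one list.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return dict(grouped)

result = reducer(mapper(row) for row in rows)
# result["A"] == ["B", "C", "Z", "W"]
```

On a real cluster, many mappers run in parallel over chunks of the data, and the framework shuffles all pairs with the same key to the same reducer.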

MapReduce is an optimized model for Big Data, and Hadoop is the framework of choice to run the model on because of its ability to handle processing large datasets. In class, we used mrjob, a Python library, to write MapReduce programs, and we also worked with Hive, which is a data warehouse that sits on top of Hadoop and enables querying and running analysis with SQL-like syntax. There are many other tools like Pig that we could have practiced using, but what we covered still hit the core concepts.

Additionally, we learned how to set up an Amazon EC2 instance, which is a virtual computer you can use to run programs. It’s great if you want to train models that can take several hours or longer to run (especially if you want to run several at once), since it frees up your local computer for shorter-term activities. More specifically regarding MapReduce, we learned how to use Amazon EMR (Elastic MapReduce), which allows you to spin up a remote Hadoop cluster to run these types of jobs. You can even distribute the workload across multiple virtual computers on Amazon, but it can cost money depending on your processing needs.


We spent a day in class learning Flask (a Python web framework, for anyone who hasn’t read this blog before). There are students who plan to build data products, and typically those are distributed online.

To clarify what a data product is, Google is an example. There is an interface where, based on user interaction, work is done on the backend to provide data back to you in some format. Zivi, one of the women in my Hackbright class, created Flattest Route, which is a great example of a data product. Users input where they are and where they are going, and the site generates the flattest route between those points.

So we went over Flask because it’s a simpler framework to pick up if you are putting something online. It can still be a bit tough after only looking at it for a day if you don’t have experience with frameworks, but the instructors plan to give support as needed through our projects.
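As a rough sketch of what serving a data product with Flask looks like, here’s a minimal app. The route name and the `predict` stub are made up for illustration; a real data product would run a model behind that function:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def predict(start, end):
    # Stand-in for a real model; a data product would compute
    # something like the flattest route between two points here.
    return {"start": start, "end": end, "route": [start, end]}

@app.route("/route")
def route():
    # Read user inputs from the query string and return JSON.
    start = request.args.get("start", "")
    end = request.args.get("end", "")
    return jsonify(predict(start, end))

if __name__ == "__main__":
    app.run(debug=True)
```

The web layer stays thin: Flask just collects inputs and hands back whatever the backend computes.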


During our pseudo free day, we researched our projects because we will be submitting top project ideas on Monday morning. The rest of next week we will work on fleshing out our project ideas while also learning more complex machine learning algorithms (e.g. Random Forest).

Even though we were all worn out and tried to take it a little easy, it was still an interesting and busy week regarding content.


Deep Learning Surface

Deep Learning is a tool in the Machine Learning (ML) toolbelt, which is itself a tool in the AI and Data Science toolbelts. Think of it as an algorithm subset of a larger picture of algorithms, and its area of expertise is solving some of the more complex problems out there, like natural language processing (NLP), computer vision and automatic speech recognition (ASR). ASR is what’s running when you talk to the customer service computer voice on the phone instead of pushing a button.

Why am I writing about this? Because it was the topic of my tech talk at Zipfian this week. I chose Deep Learning because I have an interest in making technology smarter, and I was clued into this area of ML as being more advanced in getting computers to act in a more intelligent and human way.

My research was only able to skim the surface because it is an involved topic that would take some time to study above and beyond what Zipfian is covering. Below is a summary of some key points I covered in my talk and additional insights. Also, the presentation slides are at this link.

Deep Learning in a nutshell:

  • Learning algorithms that model high level abstraction
  • Neural networks with many layers are the main structures
  • Term took off in 2006 when Geoff Hinton demonstrated the impact deep neural nets could have


To their credit, Hinton and others have been working on research in this field since the ’80s despite a lack of interest and minimal funding. It’s been a hard road for them that has finally started to pay off. AI and neural networks in general have actually been explored since the ’50s, but the biggest problems in that space have been computer speed and power. It really wasn’t until the last decade that significant progress and impact have been seen. For example, Google has a project called Brain that can search for specific subjects in videos (like cat images).

I mention Hinton because he’s seen as a central driver of Deep Learning, and many look to him to see what’s next. He also organized the Neural Computation and Adaptive Perception (NCAP) group in 2004, which is invite-only with some of the top researchers and talent in the field. The goal was to help move Deep Learning research forward faster. In fact, many of those NCAP members have been hired in the last few years by some of the top companies diving deep into the research. For example:

  • Hinton and Andrew Ng at Google
  • Yann LeCun at Facebook
  • Terrence Sejnowski at the US BRAIN Initiative

It’s a field that technically has been around for a while but is really taking off with what technology is capable of now.


Regarding the structure, neural networks are complex and were originally modeled after the brain. They are highly connected nodes (processing elements) that process inputs based on statistical, adaptive weights. Basically, you pass in some chaotic set of inputs (it has lots of noise) and the neural net puts it together as an output. It’s like assembling a puzzle with all the pieces you feed it.

Below is a diagram of a neural net from a presentation Hinton posted.

The overall goal of neural networks is feature engineering. It’s about defining the key attributes/characteristics of the pieces that make up the puzzle you are constructing, and determining how to weight and use them to drive the overall result. For example, a feature could be a flat edge, and you would weight nodes (apply rules) to place those pieces as a boundary of the puzzle. The nodes would have some idea of how to pick up pieces and put them down to create the puzzle.

In order to define weights for nodes, the neural net model is pre-trained on how to put the puzzle together, and the pre-training is driven by an objective function. An objective function is a mathematical optimization tool that helps select the best element from a set of available alternatives. The function changes depending on the goals of the network. For example, you will have a different set of objectives for automatic speech recognition for an audience in the US vs. Australia, so your objectives take those differences into account to adjust node weights through each training example and improve the output.

A couple of other concepts regarding neural nets and Deep Learning are feedforward and backpropagation (backward propagation of errors). A feedforward structure passes input through a layer of nodes where the inputs are treated independently and the learning is unsupervised. Nodes can’t see what pieces the others are holding and can only use their pre-trained weights to adjust / put the pieces in the place they think is best for the output. Restricted Boltzmann Machines and Denoising Autoencoders are examples of structures trained this way.

Backpropagation works on multi-layered / stacked structures and is supervised learning. It tweaks all the weights in the neural network based on outputs and defined labels for the data. Backprop can look at the output of the nodes at different points in the process of constructing the final picture (seeing how the pieces are starting to fit together). If the picture seems to have errors / pieces not coming together, then it can adjust weights in the nodes throughout the network to improve results. Gradient descent is an optimization technique that is regularly used together with backprop to decide how much to adjust each weight. Example backprop networks include Deep Belief Networks and Convolutional Neural Networks (regularly used in video processing).
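To make the feedforward idea concrete, here’s a toy forward pass through a tiny two-layer network in NumPy. The weights here are arbitrary made-up numbers; training (e.g. with backprop) is what would actually set them:

```python
import numpy as np

def sigmoid(z):
    # Squash any real number into (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

# Toy two-layer feedforward pass: input -> hidden -> output.
x = np.array([0.5, -1.2, 3.0])           # input features
W1 = np.array([[0.1, 0.4, -0.2],
               [0.3, -0.1, 0.2]])         # hidden layer weights (2 nodes)
W2 = np.array([0.7, -0.5])                # output layer weights

hidden = sigmoid(W1 @ x)                  # each hidden node mixes all inputs
output = sigmoid(W2 @ hidden)             # single output in (0, 1)
```

Backprop would compare `output` against a known label and push error gradients backward through `W2` and `W1` to nudge every weight.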

Last Thoughts

So I for one would love to see a Data from Next Gen or a Samantha from Her, but neural nets are a far-off step from creating that level of “smart tech”. Plus, as mentioned above, they are one tool in the bigger picture of AI. They are a very cool tool and definitely beat out other algorithms in regards to the complexity of problems they can solve. They are fantastic at classification, prediction, pattern recognition and optimization, but they are weak in areas like logical inference, integrating abstract knowledge (e.g. ‘sibling’ or ‘identical to’) and making sense of stories.

On the whole, Deep Learning is a fascinating space for the problems it can handle and is continuing to solve. It will be interesting to see what problems it solves next (esp. with such big names putting research dollars behind it).


Below are the references I used while researching the topic. It’s not an exhaustive list, but it is a good start.

Side Note on Zipfian

On the whole it was another hectic week. In a very short note: we covered graph theory, NetworkX, the k-means algorithm, and clustering overall. There was a lot more detail to all of that, but considering my coverage above, I’m leaving the insight at that for this week.

One third of the way..

I’m keeping this a bit brief because there is a lot to do. Good week and busy as usual. Lots of Naive Bayes. The best part was that my team won the simulated Kaggle competition about StumbleUpon.

Below is a summary of the concepts and tools covered this week.

  • Web Scraping
  • REST APIs
  • Tokenization
  • Natural Language Processing (NLP)
  • Vectorization (Count, Tf-idf)
  • Ngrams
  • Feature Engineering
  • Classification – Naive Bayes (Multinomial, Bernoulli, Gaussian)
  • Confusion Matrix
  • ROC Plots & Area Under the Curve
  • Deep Learning

New Tools:

  • MongoDB
  • SQL
  • SQLite3
  • Regex
  • Beautiful Soup
  • NLTK

It really struck me this week what makes Zipfian different from Hackbright beyond the focus of the programs.

In my post last week, I summarized the format, which seems similar to Hackbright. Still, I don’t think I really got across, nor appreciated, the difference, which is the pace (e.g. the workload outside of class and the number of topics covered in even just a day).

Hackbright had exercises, but we actually did some tutorials in class on key tools we would use. We weren’t able to deep dive very much because of time constraints, but we would still take time to learn the main tools we needed for web dev. And we really weren’t asked to study too much outside of class (even though many of us did anyway) because we were covering enough in school. Note: this may have changed some since I went.

At Hackbright, we spent 1 1/2 days doing a SQL tutorial. At Zipfian, we spent 1 1/2 days using SQL as part of a web scraping exercise, and we were asked to do the tutorials for it outside of class. Plus, we were learning several of the concepts and corresponding tools I mentioned above at the same time. MongoDB is a great example: we didn’t talk about it in any lectures, but it was mentioned in an exercise as a tool we should use, and if we hadn’t gotten to the tutorial on our own, we had to learn it on the fly as we worked.

The program is not about hand-holding you through a tutorial to learn how to use a package. Hackbright really isn’t either, but the expectations at Zipfian are definitely higher that you can ramp up quickly on multiple things at once. It sets the stage to expose us to as much as possible so we get a sense of the broad picture and become independent enough to seek out support and solutions. They want you to do tutorials and readings mostly on your own and come to class ready to apply as much as possible. Granted, finding time outside of class is a bit of a challenge, but I get the value of using the classroom for focused application as well as for making us savvy about quickly picking up new tools. And the classroom is still a place to ask for support if you can’t make sense of how to even apply the concepts. The teachers and the students have all been extremely valuable resources in this process. This environment is why we are able to learn as much as we are in such a short amount of time.

It’s not a huge ah-ha, but it seemed worth mentioning. Now back to studying.

Machine Learning Starts with Linear Regression

We wrapped up our statistics deep dive on Monday with an exercise around Multi-Armed Bandit (MAB) and focused the rest of the week on regression.

Main Topics Covered in Class:

  • Multi-Armed Bandit
  • Linear Regression
  • Gradient Descent
  • Cross Validation
  • Final Project Overview
  • Kaggle Competition

We also added in scikit-learn, a Python package primarily used for machine learning algorithms.

MAB / More Stats

MAB was actually interesting to learn about because its goal is to address some of the shortfalls of A/B testing. For example, A/B testing only compares two options at once, and there is potential for bias when showing an old version against a new version. MAB allows testing multiple options at the same time while generating and updating performance scores. There are a couple of different algorithm variations in MAB, but it basically comes down to showing the best-performing option most of the time (e.g. 90%) and adding some amount of randomization to show a lower-performing option, giving other options the opportunity to increase in performance (e.g. popularity). How often you randomly show an option can affect how long it takes the performance to change. The MAB algorithms typically beat out A/B testing in picking the best option with the lowest error. This article gives some insight into MAB, but beware that the code in the article is a little wonky.
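Here’s a minimal epsilon-greedy sketch of the bandit idea, with made-up click rates. Real MAB variants (e.g. UCB, Thompson sampling) are more sophisticated, but the explore/exploit trade-off is the same:

```python
import random

# Epsilon-greedy bandit: show the best-performing option most of the
# time, and a random one the rest (epsilon) to keep exploring.
true_rates = [0.05, 0.03, 0.08]   # hidden click-through rates (made up)
counts = [0, 0, 0]                # times each option was shown
rewards = [0.0, 0.0, 0.0]         # total clicks per option
epsilon = 0.1

random.seed(42)
for _ in range(10000):
    if random.random() < epsilon:
        arm = random.randrange(len(true_rates))        # explore
    else:
        scores = [rewards[i] / counts[i] if counts[i] else 0.0
                  for i in range(len(true_rates))]
        arm = scores.index(max(scores))                # exploit best so far
    counts[arm] += 1
    rewards[arm] += 1 if random.random() < true_rates[arm] else 0

best = counts.index(max(counts))   # the option shown most often
```

A larger epsilon explores more and adapts faster when an option’s performance changes, at the cost of showing weaker options more often.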

The main takeaway from this week is that stats talks a lot about what came before, modeling what the conditions were so you can understand things like best performers based on the past, whereas machine learning is all about predicting what is to come. When we closed out Monday’s class, the instructors said, “We are done with stats, and now we are starting… well, stats (that made me laugh), but this time with a machine learning perspective.”

Machine Learning

Apparently linear regression (y = mx + b) is one of the simplest (and most widely known) algorithms used in machine learning and thus a good place to start. So yeah, it is about fitting a line to known data to create a model that predicts your dependent variable (typically called y, which could represent something like the price of a house), and figuring out how to minimize residuals (~errors) and/or reduce the cost function (the sum of squared errors) to improve the line fit. There are a couple of different approaches to generating the model that account for cases such as too many variables and not enough actual data, and/or how to handle extreme outliers.
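As a quick sketch, here’s simple linear regression fit with the closed-form least-squares solution on made-up data, along with the cost (sum of squared errors) mentioned above:

```python
# Fit y = m*x + b by least squares: minimize the sum of squared
# residuals (the cost function) over toy, made-up data.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.0, 9.9]   # roughly y = 2x

n = len(xs)
x_mean = sum(xs) / n
y_mean = sum(ys) / n

# Closed-form slope and intercept for simple linear regression.
m = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) / \
    sum((x - x_mean) ** 2 for x in xs)
b = y_mean - m * x_mean

residuals = [y - (m * x + b) for x, y in zip(xs, ys)]
cost = sum(r ** 2 for r in residuals)    # sum of squared errors
```

With one feature the closed form is all you need; the iterative approaches come into play when there are many features or too little data.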

Part of creating the prediction is determining which features/variables to use, and there are ways to assess multicollinearity (finding redundant features so you can simplify the model) and heteroscedasticity (when the variance of the errors isn’t constant, for example across sub-populations in features like age and income). And yeah, good luck with saying that word. We also discussed an alternative approach for fitting the model, especially when there is a large number of features; it’s a faster way of finding the optimal model with so many variables. Andrew Ng provides some of the best materials to explain this concept, and I reference him further below.

Additionally, we learned about cross-validation and defining test and training sets to work with. Usually you want to set aside 20-30% of the data for testing and build the model with your training data. There are different approaches to how you split and test, such as K-fold and leave-one-out. Wikipedia provides a good description of cross-validation.
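A bare-bones version of the K-fold splitting idea might look like this (index bookkeeping only; scikit-learn provides a production version of the same thing):

```python
def k_fold_splits(n_samples, k):
    """Yield (train_indices, test_indices) pairs for k-fold CV.

    Each fold takes a turn as the held-out test set while the
    rest of the data is used for training.
    """
    indices = list(range(n_samples))
    fold_size = n_samples // k
    for fold in range(k):
        start = fold * fold_size
        # The last fold absorbs any remainder.
        end = start + fold_size if fold < k - 1 else n_samples
        test = indices[start:end]
        train = indices[:start] + indices[end:]
        yield train, test

for train, test in k_fold_splits(10, 5):
    pass  # fit the model on train, evaluate it on test
```

Leave-one-out is just the extreme case where k equals the number of samples, so every fold holds out a single point.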

Final Projects

Midway through the week, we talked about final projects and how to approach coming up with an idea and planning. They grouped potential projects into data analysis vs. data product and stressed that we should focus on answering a question first before thinking about techniques. We will only have 2 weeks to do the project, and we have to come up with a proposal to get approval before technically starting. Mainly this is to get us to plan ahead so we optimize our time.

I’m starting to think on an interest I’ve had for a while, which is AI. I want to do something that gets my computer to predict and solve a problem for me before I know I have the problem. I’ve heard Android is already doing something along these lines, and I know there are a lot of commercial solutions that can already do much more than I can accomplish in a couple of weeks. Still, it’s a challenge I’m interested in tackling to learn more about the space, as well as because I want to find ways to make computers smarter. So I’m definitely working through what this will look like.

Simulated Data Science Competition

Last note about the week’s activities is that we competed in a simulated Kaggle competition. I’ve got a link above to the Kaggle site, but they primarily provide a contest space for data science challenges, where many companies post projects and awards for the best solution. We took an old contest and ran through an exercise of solving the problem. It was great to jump into the deep end and start thinking about how to apply all that we had learned, as well as learn how to work in a team to solve this type of problem. It was a stressful but fantastic exercise that reminds me of hackathons, and the plan is to have us do this weekly.

Last Thoughts & Key Tip:

I definitely feel like I’m drinking from a firehose. I was a little freaked out about it last week, but I’m getting more comfortable with the deluge of information. Our days include a couple of lectures that cover relevant topics, but most of the time is spent on exercises where we try to learn concepts while also applying them. We have readings that correspond to every class, and usually they don’t spend a lot of time teaching the concepts. You are expected to do a lot of research and study in and out of school. The classroom is very focused on application.

In addition, there are a ton of terms and symbols used to explain all these concepts that sometimes mean the same thing or slightly different things, and our instructors are not shy about using all the terms and giving content in a very abstract form at an advanced level (as well as giving more concrete examples when asked). And when we are not learning concepts and applying them, we are doing additional side projects to learn techniques needed to be a well-rounded data scientist ready for working in the industry. I’m sharing this to help set expectations that this class is true to the classification of a bootcamp. They don’t make it impossible, but they do make you work for it. You just have to decide how hard you want to work for it.

And for the tip, definitely check out Andrew Ng’s Machine Learning videos on Coursera. He does a fantastic job explaining many concepts we cover.

Side Note:

A fellow HB alum and amazing coder, Aimee, has been kind enough to mention me on her blog a couple of times and I wanted to return the favor. She writes some great stuff about coding and data and I definitely recommend checking out her site Aimee Codes.

Oh Math

This week was all about statistics and learning more Python packages. It was a tough week, and we covered the topic that intimidated me the most: the math.

I actually grew up loving math, and I know I can understand it with enough focus and time spent studying. Still, it was a lot of grad-level content that we pretty much squeezed into a few days. There is no way to fully learn all the concepts in a week, and that is a common theme throughout this class (and probably most bootcamps). Additionally, several people in the class have PhDs in STEM (science, tech, engineering & math) and understand the math at a whole other level. It is definitely helpful to have students to learn from, while also making it hard to keep up in the exercises at times.

I suspect many out there who have thought about data science decided against it because of the math (if not the programming), and I can vouch for the fact that you will literally be looking at Greek letters and reading somewhat dense materials on statistical concepts. I know I’m not making this sound any better. But seriously, if you are already coding or thinking of taking on coding, you can take on the math.

I’m not an expert in it yet, but after this week, I can already pseudo-read those Greek equations that wiki loves to use in math model examples, and I actually understand why we want to use distributions (to help define unknown and random variables). It’s hard, and it was a week of massive frustration (head banging against a literal brick wall – they have them in our classroom). Still, sometimes that’s what you have to go through to get started, and there were breakthroughs this week.

If you do decide to take on Zipfian and/or pursue data science in any shape, I cannot say this enough: you should totally start studying stats and linear algebra, as well as sprinkle in a little calc. A couple of resources we are using are:

When I get to concepts I don’t understand in some of the materials we are reading, I switch over to Khan Academy videos, and if I’m still struggling, I search for explanations that put it in a form that works for me or talk to someone in class. Despite the prolific online resources, having a classroom environment like this can’t be beat in regards to speed of learning.

Key Stats Concepts Covered:

  • Uniform Distributions
  • Bernoulli & Binomial Distribution
  • Poisson Distribution
  • Exponential Distribution
  • Beta & Gamma Distribution
  • Normal Distribution
  • T Distribution
  • Sampling Techniques
  • Hypothesis Testing & Confidence Intervals
  • Kolmogorov-Smirnov Test
  • Frequentist A/B Testing
  • Bayesian A/B Testing
  • Markov Chain Monte Carlo Algorithm

Key Python Packages/Tools Covered (New & Reviewed):

  • NumPy – good for matrices
  • Matplotlib – data visualization
  • SciPy – statistical functions
  • Pandas – data structures & analysis
  • PyMC – MCMC (Markov chain Monte Carlo) functions

Shout out to Giovanna for helping me interpret the proof of the computational Beta version for Bayesian A/B testing, and to Linda for the very relevant cartoon today. Next week is all about machine learning.

Zipfian First Week Rundown

First week of Zipfian is already done, and it reminds me how fast Hackbright felt like it went. The focus for the week was exposing us to core tools we will use, as well as the main activities/processes around working with data.

Week Summary

The main tools used this week were Python, IPython, Git and Bash, and we went through three different exercises where we were gathering, cleaning, exploring and sometimes reporting on data. A large part of our exercises throughout the program will be done in Python, and we spent 4 of the 5 days using it. This is a bit of a shift for the school because they split more time with R in the last session, and it has to do with the growing popularity of using Python for data science. There’s a great article I read recently on the subject at R-bloggers. We will still use R, but the emphasis is more on Python.

We also used git and Github throughout the week to handle revision control and this will be daily for the whole program. Zipfian does keep their content in private repositories, but where possible, I will try to share some of the projects on my Github. A number of the resources we are using are public and several of them are referenced on a great open source Github repository by clarecorthell to provide a free approach to getting into data science (Open-Source Data Science Master Curriculum).

Another tool we started using this week, which I’m really getting addicted to, is IPython. It has nothing to do with Apple; it is a very helpful kernel for practicing Python on the fly, and its notebook (browser GUI) is user-friendly for testing functions and bits of code in isolation. Some resources to help you get started using IPython beyond its regular site are a tips site and an advanced tips site.

Daily Rundown

As mentioned in the last post, the first day was spent practicing git and how we will use it throughout the course, as well as running through a few practice Python exercises. We did a problem where we coded functions to compute the frequentist approach to statistical inference and the Bayesian approach. Spoiler alert for those who haven’t seen the term frequentist before: it’s basically the fraction of the number of times something happens out of the total times it could happen (e.g. 4/5 days spent using Python).
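A tiny sketch of the two approaches on the made-up “4 of 5 days” example. The uniform Beta prior here is my choice of a simple starting point, not necessarily what we coded in class:

```python
# Frequentist vs. Bayesian estimate of a success rate.
successes, trials = 4, 5

# Frequentist: just the observed fraction.
freq_estimate = successes / trials            # 0.8

# Bayesian: start from a uniform Beta(1, 1) prior and update it
# with the data; the posterior is Beta(1 + successes, 1 + failures),
# whose mean is (1 + successes) / (2 + trials).
alpha = 1 + successes
beta = 1 + (trials - successes)
bayes_estimate = alpha / (alpha + beta)       # 5/7, about 0.714
```

With only 5 observations the prior pulls the Bayesian estimate toward 0.5; with lots of data the two estimates converge.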

The second day we wrote bash scripts all day, working with a massive data file that we learned how to clean and parse into smaller files, and then strip out specific bits of info to create URL links that we then pulled data from. It was a great exercise in exploring what you can do just with bash, as well as getting started in the experience of pulling and exploring data.

Wed. and part of Thurs., we took most of what we did in bash and repeated it in Python. We spent the rest of Thurs. and Fri. building a recommender. It was the Netflix exercise, where you have a set of user movie reviews and you want to recommend new movies to a user based on her/his past preferences. Funny enough, I spent week 5 and part of 6 at Hackbright building the actual web framework for the Netflix exercise, where we were given the Pearson correlation to apply for the recommender (which had similar results). Here we were building the recommender itself and leaving out the framework.

We used the Euclidean distance formula on existing product ratings to create a similarity matrix of products to products based on all user ratings. We learned how to use NumPy to create and manipulate matrices, and then we normalized the data to obtain the weighted ratings on products based on that user’s specific tastes. Finally, we outputted the top rated recommendations for the user. During the exercise, we also applied Matplotlib to visualize the data and help us test whether it looked directionally accurate for what we expected.
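Here’s a compressed sketch of those steps on a tiny made-up ratings matrix: build an item-item similarity matrix from Euclidean distances, then score unrated items with similarity-weighted, normalized ratings. The numbers and the similarity transform (1 / (1 + distance)) are illustrative choices, not the exact ones from class:

```python
import numpy as np

# Tiny made-up ratings matrix: rows are users, columns are movies,
# 0 means unrated.
ratings = np.array([
    [5.0, 4.0, 0.0, 1.0],
    [4.0, 5.0, 1.0, 0.0],
    [1.0, 0.0, 5.0, 4.0],
])

def similarity_matrix(R):
    """Item-item similarity from the Euclidean distance between
    rating columns: closer columns get similarity nearer 1."""
    n_items = R.shape[1]
    sim = np.zeros((n_items, n_items))
    for i in range(n_items):
        for j in range(n_items):
            dist = np.linalg.norm(R[:, i] - R[:, j])
            sim[i, j] = 1.0 / (1.0 + dist)
    return sim

sim = similarity_matrix(ratings)

def recommend(user_ratings, sim, top_n=2):
    # Score each item by similarity-weighted ratings, normalized
    # by the total similarity weight used.
    rated = user_ratings > 0
    scores = sim[:, rated] @ user_ratings[rated]
    weights = sim[:, rated].sum(axis=1)
    scores = scores / np.where(weights > 0, weights, 1)
    scores[rated] = -np.inf   # don't re-recommend rated items
    return np.argsort(scores)[::-1][:top_n]

picks = recommend(ratings[0], sim, top_n=1)
```

The normalization step is what tailors the weighted scores to that user’s own rating scale before picking the top items.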


I was reminded this week that one of the hardest parts of bootcamps is having the stamina to get through them. Sitting and learning for 9 to 12 hours straight (breaking for lunch, of course) for at least 5 days in a row can wear you out on its own. And doing that while talking and working with another person almost the whole time can be just as exhausting (esp. for introverts, which many who do this tend to be). It’s like a marathon in that it’s usually your head that gets in the way of sustaining, and also in that you have to pace yourself. I felt it at the end of the day Thurs., when my head was just full and didn’t want to brain anymore, and all I was good for at that point was sleeping. That of course came after pushing myself to keep reading and coding late on Mon., Tues. and Wed., and I’m not the only one, because most of the class typically stays late each day.

Coming Up & Tips

In the next couple of weeks, we will do a stats deep dive as well as machine learning. We will explore data analysis and machine learning packages like Pandas, NumPy, SciPy and scikit-learn, as well as visualization tools like D3 and Matplotlib.

On the whole, I really did enjoy the week, and it helped me appreciate how much Python I have learned since last year. I will say that if you want to do this program, definitely work on practicing Python with online tutorials like Learn Python the Hard Way and Codecademy, as well as practice coding your own projects. And definitely start studying linear algebra and stats.

Try: Data Science Except: Monty’s Bayes Example

Great first day at Zipfian. Definitely a different experience from starting Hackbright, but with some similarities. Granted, there are the obvious differences: the content focus on data science vs. web application development, as well as 20% women in the class vs. 100%. Plus, I’m not the oldest or the youngest of the group. We have a really nice mix of people from various parts of the country and a myriad of backgrounds, though there are a lot of PhDs and/or engineering backgrounds. There was a much quieter energy to the start of the class, even though you could tell there was some nervousness.

I know there is no way I could do this program if I were where I was last Feb in my software experience. Out of the gate today, we were working on forking, cloning, branching and running pull requests through GitHub. We were also learning how to use SHAs to move in and out of previous commits (esp. ones you no longer wanted). It took me a couple of weeks to even understand what GitHub was when I started Hackbright, and I stuck pretty close to add and commit for the longest time. As for the actual exercises we did today, we were coding with list comprehensions, try/except and lambdas, Python concepts I was still learning how to apply even after I graduated Hackbright. So it was definitely hit the ground running.

We are also doing pair programming for 5 weeks, which does make me groan a little even though I understand and see the value. Still, I had a really great experience my first day out pairing again. My partner was coding in C prior to class and helped me understand some great best practices in programming fundamentals, while my knowledge of Python was a little stronger and I was able to help guide us on how to code our ideas for solutions.

Patience and communication are still the key skills that make pairing successful. I want to expand on this to say that it’s also really important not to discount someone’s capabilities if s/he lacks knowledge of certain subjects out of the gate. You will be amazed at what you can learn from someone who is also learning, if you are receptive and respectful. Just don’t write off everything that person has to say. On the flip side, don’t shut down if you are uncertain about concepts initially; push on with questions, because this is the space to learn and make mistakes.

We ended the day working through the Monty Hall Bayes example. Talk about a brain teaser. There were a number of us crowded around the whiteboard talking through it. It took a little time, but we got there, and it was actually really cool to see us all working together to get clear on the concepts. This is definitely going to go fast, and it will be as intense as I expected, if not more so.
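For anyone who wants to convince themselves the way we did at the whiteboard, a quick simulation shows why switching is the better strategy:

```python
import random

def monty_hall_trial(switch, rng):
    doors = [0, 1, 2]
    prize = rng.choice(doors)
    pick = rng.choice(doors)
    # Monty opens a door that is neither the pick nor the prize.
    opened = rng.choice([d for d in doors if d != pick and d != prize])
    if switch:
        # Switch to the one remaining unopened door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == prize

rng = random.Random(0)
trials = 20000
switch_wins = sum(monty_hall_trial(True, rng) for _ in range(trials))
stay_wins = sum(monty_hall_trial(False, rng) for _ in range(trials))
# Switching wins about 2/3 of the time, staying about 1/3.
```

The intuition matches the Bayes argument: your first pick is right 1/3 of the time, so switching wins in the other 2/3 of cases, where Monty’s reveal has pinpointed the prize door.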