Features

Sunayu utilizes Spark to process your Big Data

authored by: Kim Crawley

Big Data can easily get too big to handle. The technological per-capita capacity to store information is estimated to double roughly every three and a half years. By 2020, the world’s total collective computer data volume was estimated at roughly 44 zettabytes! For perspective, a zettabyte is 1,000,000,000,000,000,000,000 bytes, or a billion terabytes. A typical consumer external HDD holds one to ten terabytes, and even that seems like an awful lot of storage on a personal level. By 2025, it’s estimated that the world’s combined data will reach about 163 zettabytes.

Your business network may have petabytes of data, with each petabyte containing a thousand terabytes. Even at that scale, processing and analysis are a real challenge, and you must make sure you have the correct tools and expertise to manage it. Sunayu has both, and we’ll relieve your business of that burden.

Wield the power of Apache Spark

Apache Spark has surged in popularity alongside Hadoop, and it’s easy to understand why. Developers can work with its APIs effectively, and Spark was designed from the ground up to address the limitations of earlier cluster-computing frameworks.

Implement better machine learning than ever before! Spark’s MLlib library takes full advantage of Spark’s distributed, memory-based architecture for efficient data analytics. Sunayu can use Spark to help your business process massive amounts of data through AI like never before, even if your network’s data lake seems more like a data ocean!

What sort of Big Data processing does your business need? Collaborative filtering, cluster analysis, feature extraction, random data generation, logistic regression, decision trees, random forests? We’ll help you make it happen.
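To make that concrete, here is a minimal PySpark sketch of training a logistic regression model with MLlib’s DataFrame-based API. The file path and column names are illustrative assumptions, not a real pipeline:

from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("mllib-sketch").getOrCreate()

# Hypothetical training data: numeric feature columns plus a 0/1 "label" column.
df = spark.read.parquet("s3://your-data-lake/training-data.parquet")

# MLlib expects all features packed into a single vector column.
assembler = VectorAssembler(
    inputCols=["amount", "frequency", "account_age"],
    outputCol="features")

# The fit is automatically distributed across the cluster by Spark.
model = LogisticRegression(featuresCol="features", labelCol="label") \
    .fit(assembler.transform(df))
print(model.coefficients)

Because the same DataFrame API drives every MLlib algorithm, swapping in a decision tree or random forest classifier is a one-line change.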

Data lakes pose unique challenges

It’s easier to collect a data lake because you can leave your data in its native formats. But massive data analysis needs combined with a lack of metadata can make newcomers wary. As Gartner’s Andrew White wrote: “The need for increased agility and accessibility for data analysis is the primary driver for data lakes. Nevertheless, while it is certainly true that data lakes can provide value to various parts of the organization, the proposition of enterprise-wide data management has yet to be realized.”

But Internet of Things tech is booming, so data lakes simply cannot be avoided. Sunayu can manage your data lake with confidence, ready to process it through Spark for maximum efficiency and effectiveness.

ACID transactions made possible with the Delta Lake storage layer

Massive databases face huge problems: anything from power failures to incomplete data ingestion to little bugs that snowball into larger errors can invalidate your data. Big Data coincides with a much greater risk of little problems exploding on a large scale.

So your data transactions must be ACID: assuring atomicity, consistency, isolation, and durability. Sunayu processes ACID transactions through Spark with the Delta Lake storage layer.
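As a minimal sketch of what that looks like in practice (the paths are assumptions, and the delta-spark package must be available to Spark), every write through the Delta format is an atomic transaction recorded in a transaction log:

from pyspark.sql import SparkSession

spark = (SparkSession.builder.appName("delta-sketch")
         .config("spark.sql.extensions",
                 "io.delta.sql.DeltaSparkSessionExtension")
         .getOrCreate())

events = spark.read.json("s3://your-data-lake/raw/events/")

# An atomic, isolated append: readers never see a half-written table.
(events.write.format("delta").mode("append")
       .save("s3://your-data-lake/delta/events"))

# Durability plus "time travel": read the table as of an earlier version.
first_version = (spark.read.format("delta")
                 .option("versionAsOf", 0)
                 .load("s3://your-data-lake/delta/events"))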

Open source software can be easier to improve over time, with potentially thousands of developers available to tweak the code base. So it’s good to know that Delta Lake has become an open source standard for interfacing with data lakes. From an October 2019 press release:

“At today’s Spark + AI Summit Europe in Amsterdam, we announced that Delta Lake is becoming a Linux Foundation project. Together with the community, the project aims to establish an open standard for managing large amounts of data in data lakes. The Apache 2.0 software license remains unchanged.

Delta Lake focuses on improving the reliability and scalability of data lakes. Its higher-level abstractions and guarantees, including ACID transactions and time travel, drastically simplify the complexity of real-world data engineering architecture. Since we open-sourced Delta Lake six months ago, we have been humbled by the reception. The project has been deployed at thousands of organizations and processes exabytes of data each month, becoming an indispensable pillar in data and AI architectures.

To further drive adoption and grow the community, we’ve decided to partner with the Linux Foundation to leverage their platform and their extensive experience in fostering influential open source projects, ranging from Linux itself, Jenkins, and Kubernetes.”

Sunayu has the experience and the resources to get the most out of the Delta Lake storage layer, keeping your data lake intact and ready for analysis!

The Big Data Analytics your business needs

Cloud platforms like AWS and Microsoft Azure have made massive data storage more flexible, scalable, and affordable than ever for organizations in all industries. Chances are your data lake isn’t on premises! Sunayu is ready to manage your Big Data in the cloud, for both the practical present and the exciting future.

Whether your business needs to mitigate fraud, optimize your supply chains, or otherwise integrate previously unmanageable quantities of data, let the experts make it happen. One day your network’s petabytes will turn into exabytes, and your Big Data will become Bigger Data. Get ready for the future: the possibilities of machine learning will capture your awe!

Features

How SaltStack can help both your red team and blue team

Your organization’s red team and blue team must work constantly to keep your networks secure.

Your red team must check whether your security policies, procedures, and configurations actually work, not just by running a penetration test, but by running various attack simulation campaigns on an ongoing basis. They may spend weeks pretending to be one type of cyber attacker or another. And whenever any change is made to your security policies, procedures, and configurations, they must audit, audit, and audit again.

Your blue team must work constantly to harden your networks, both reactively and proactively. Has the red team discovered a vulnerability? Patch, remove, or mitigate it. They may work with a security operations center, your red team, or perhaps a purple team (both offensive and defensive) to find ways to improve the security of your networks. They must work in a continuous state of security hardening.

That’s an awful lot of work that must be done constantly! What if many of your blue team’s and red team’s tasks could be automated? Automation, when implemented properly, spares people tedious, repetitive work and frees them to focus on the tasks best done by a living person. Enter SaltStack.

What is SaltStack?

SaltStack is a powerful automation framework that offers tremendous compatibility and flexibility. Its architecture is based on remote execution. Imagine remotely executing commands on hundreds of client machines at once, or on just one.

The Salt Master is the interface for executing many modules and commands. Each connected client machine is referred to as a minion. (No minions memes, please.) Network connections between the master and its minions are strongly encrypted to help secure the entire process. If you prefer SSH, there’s even a Salt SSH “agentless” systems management channel you may use.

Developers have created a plethora of modules that can be used with SaltStack, and anyone with the know-how may develop their own because the application is open source. Security practitioners who don’t code very much can still use various modules and scripts through SaltStack’s command-line interface to powerful effect.
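For a taste of what that looks like (the minion targets here are illustrative), two one-liners from the Salt master can ping every minion and then query an installed package version across the whole fleet:

# Confirm every minion is up and responding
salt '*' test.ping

# Report the installed openssl version on every minion
salt '*' pkg.version openssl

The same targeting syntax accepts globs, regular expressions, and grains, so a single command can be aimed at one machine or thousands.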

The main Salt Master application is based on Linux, but there are also applications for Windows, VMware vSphere, and BSD Unix. SaltStack can remotely execute commands on a variety of devices running most major operating system platforms.

How SaltStack can accelerate your blueteam

Lots of businesses in a variety of industries use SaltStack to improve the efficiency and responsiveness of their security controls. One such case study is Liberty Mutual (LMI):

“For their first SaltStack project, the LMI network security team decided to use SaltStack event-driven detection and automation to auto-resolve firewall issues and maintain predefined security policies for over 150 Junos configuration options. By replacing inconsistent bash and shell scripts with unified SaltStack automation, the team eliminated over 100 lines of code per firewall and reduced the time to detect and resolve issues by 90%—from 20 minutes down to 2.”

Another novel blue team use of SaltStack comes from a company in a completely different industry, Sterling Talent Solutions. Stephan Looney is Sterling Talent Solutions’ IT director.

From the case study:

“A significant reason for Sterling’s selection of SaltStack Enterprise was to empower night operations and the support desk with a SaltStack self-service portal. Stephan said, ‘SaltStack is extremely flexible and can be configured to automate just about any job. But this flexibility can be difficult for the less technical, Windows-oriented members of our team. And if the power of SaltStack ends up in the wrong hands, bad things can happen. SaltStack Enterprise gives systems administrators the ability to create the automation routines and then make them available as a push-button job in the console only to those authorized to do the work. We’ve already done the work on the backend solving for the majority of NOC and support desk tasks. When we customize SaltStack to our needs and make it easily consumed by the appropriate people on the team, good things happen.’”

Are you familiar with Tenable’s vulnerability management applications? SaltStack can now work with them directly.

From February, in a press release about SaltStack Protect:

“Andrew Johnson, Payroc’s information security manager, said, ‘SaltStack Protect integrated with Tenable.io substantially simplifies our ability to remediate infrastructure vulnerability at scale. The more we can break down tool and process-imposed silos that exist between our security and operations teams the more confident we become in our ability to truly secure IT. We’re looking forward to more SecOps innovation from the SaltStack team.’

SaltStack Protect 6.2 can now import Tenable.io vulnerability assessment scan results to intelligently automate vulnerability remediation. SaltStack Infrastructure automation integrated with world-class Tenable.io vulnerability management solution helps security and IT teams streamline vulnerability remediation. This integration helps speed security enforcement, reduces threats caused by imperfect infrastructure cyber hygiene, and allows security operations teams to effectively collaborate within an all-in-one, actionable vulnerability management and remediation platform.”

A variety of SaltStack applications, modules, and scripts can be utilized to make the work of your blue team so much more responsive and powerful. If you can design it, you can automate it!
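As one small, hypothetical illustration, a blue team could capture part of a hardening policy as a Salt state and enforce it everywhere, continuously. The sketch below removes telnet and keeps sshd running; a real baseline would go much further:

# harden.sls -- an illustrative fragment of a hardening baseline
remove_telnet:
  pkg.removed:
    - name: telnet

sshd_service:
  service.running:
    - name: sshd
    - enable: True

Applying it across the fleet is then a single command from the master: salt '*' state.apply harden.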

How SaltStack can accelerate your red team

A lot of your red team’s activities can be thought of as what your blue team does, but in reverse. Now that SaltStack Protect can import vulnerability scan results from Tenable.io, your red team can run vulnerability scans from Tenable and send the results directly to your blue team. Is SaltStack Protect the real purple team? Maybe so! The functionality is yours to explore.

SaltStack Enterprise can remotely execute pretty much whatever commands you’d like, so your red team can perform security audits with greater efficiency and potency. From the Enterprise whitepaper:

“SaltStack was originally built as an extremely fast and powerful remote execution engine, allowing users to execute commands asynchronously across thousands of remote systems in milliseconds. This remote execution capability allows SaltStack to act as a command and control abstraction layer so IT professionals can execute complex tasks across tens of thousands of diverse and heterogeneous systems with the click of a button. Using SaltStack remote execution, IT tasks that used to require three of your best engineers and a week to complete can now be performed in seconds by anyone on the team.”

Use your imagination, put your mind in the role of a cyber attacker, and see which attack scenarios you can design for your security testing.
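For instance (an illustrative audit that assumes Linux minions), a red team could sweep the entire fleet for unexpected listening services in one command:

# List listening TCP/UDP sockets on every Linux minion at once
salt -G 'kernel:Linux' cmd.run 'ss -tulnp'

Anything that shouldn’t be listening shows up immediately, across every targeted machine, without logging into a single one.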

Cyxtera uses the power of SaltStack to maximize the effectiveness of security audits targeted at specific compliance standards.

From the case study:

“Once they’ve defined their policies, the team can create target groups of machines across any of their 57 datacenters and scan them to quickly understand their current compliance status. For example, if an insurance customer is utilizing Windows servers in the San Jose data center, the SRE team can use the SaltStack targeting system to target specific machines and scan them against a pre-defined PCI profile. When the scan comes back it not only identifies vulnerabilities but provides the automation action SaltStack will take to remediate. This allows the team to verify and test the action before it is run.”

Depending on the situation, this can be more useful than run-of-the-mill penetration testing.

Can you imagine how massive the systems are that IBM Cloud has to test? SaltStack makes it simple and easy, rather than complex and overwhelming.

From the case study:
“Rather than set the upgrade orchestration sequence loose on a production network, the network team used SaltStack native, event-driven automation capabilities to build careful testing into the upgrade sequence. These tests would run within a controlled environment between each phase of the A/B upgrade. As each test passed, SaltStack software detects the event and deploys the next firmware upgrade automatically. While almost the entire process was performed autonomously, the SaltStack event bus allowed network engineers to monitor the process in real time and intervene if ever a test failed or a sequence timed out.”

As you can see, SaltStack can be just as useful for your red team as it is for your blue team. It’s all in the way that you use it!

Conclusion

Whether you’re red team, blue team, or even purple team, you really ought to explore how SaltStack can make your everyday tasks easier and more powerful.

SaltStack is built with Python, a language that’s well known for its simplicity and for its compatibility with all major platforms, including all flavors of Linux, Windows, macOS, and BSD/Unix. An index of all of SaltStack’s modules is available here.

Organizations large and small, in a wide variety of industries, have leveraged SaltStack to help secure their networks. Offer that power to your security teams and you can harden your security responsively, ready for the growing cyber threat landscape.

Sunayu also has the practitioners and expertise to perform blue team and red team exercises for your business. Sunayu leverages SaltStack and other cutting-edge technologies to help improve the security of your network against the latest and most destructive threats. Hiring experts to improve your security is always a worthy investment. Check out Sunayu and see what we can do for your organization!


Author: Kim Crawley

BFT Prime vs. Bitcoin

The rise of global, peer-to-peer cryptocurrency networks inspired by the Bitcoin blockchain has given new relevance to the design and implementation of Byzantine fault-tolerant systems.

In this paper, we will describe the meaning of “Byzantine” faults, present two very different solutions to the problem—Prime and the Bitcoin blockchain—and explore their comparative strengths and weaknesses.

You can find the link to the full paper here: BFT_Prime_vs_Bitcoin

Deploying your create-react-app in docker

Introduction

For our presentation at Saltconf 2017 we created a React frontend using create-react-app. In order to run this in docker and autoscale it, we built it into a docker image (check out the apps we made on our saltconf2017 git repo).
 
In this tutorial, we will show you how we built a simple app into two docker images.
 

Objectives

  1. Create an image for development running node where we can quickly develop and test our app.
  2. Create a second image running nginx for production.

Prerequisites

For this example we assume:

  • We are running CentOS 7.
  • Docker is installed (we use 17.09.0-ce in our example).

Step 1 – Install node for the app

curl -sL https://rpm.nodesource.com/setup_9.x | sudo bash -
sudo yum install -y gcc-c++ make
sudo yum install nodejs -y

Step 2 – Create the app

Follow along with the create-react-app tutorial until you have a working hello-world app.

npm install -g create-react-app
create-react-app hello-world

Step 3 – Create dev dockerfile

Use your favorite text editor to create a file called Dockerfile.dev with the following:

FROM node:9
RUN mkdir /helloworld
WORKDIR /helloworld
COPY hello-world .

RUN npm install --quiet

CMD ["npm", "start"]

EXPOSE 3000

Build with:

$ docker build -t helloworlddev -f Dockerfile.dev .
Sending build context to Docker daemon  184.9MB
Step 1/7 : FROM node:9
9: Pulling from library/node
f49cf87b52c1: Pull complete
7b491c575b06: Pull complete
b313b08bab3b: Pull complete
51d6678c3f0e: Pull complete
da59faba155b: Pull complete
7f84ea62c1fd: Pull complete
1ae6c7e5e8c9: Pull complete
7c07b0a5c6a6: Pull complete
Digest: sha256:a0e9ecaf0519151f308968ab06b001c99753297a6ce1560a69d47e7b1f16926d
Status: Downloaded newer image for node:9
 ---> 3d1823068e39
Step 2/7 : RUN mkdir /helloworld
 ---> Running in a656434f98f2
 ---> a2b2c7b8ff55
Removing intermediate container a656434f98f2
Step 3/7 : WORKDIR /helloworld
 ---> 6428b9b01b5a
Removing intermediate container 400c1a342aa9
Step 4/7 : COPY hello-world .
 ---> cf865e3184f1
Step 5/7 : RUN npm install --quiet
 ---> Running in 95fce6cd0da4
added 115 packages in 11.854s
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: fsevents (node_modules/fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for fsevents: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"})

 ---> b1da4529c2ea
Removing intermediate container 95fce6cd0da4
Step 6/7 : CMD npm start
 ---> Running in 4fd33d06db82
 ---> eb4bcddeb395
Removing intermediate container 4fd33d06db82
Step 7/7 : EXPOSE 3000
 ---> Running in 5199d4ee0b2c
 ---> ee33dc76a35a
Removing intermediate container 5199d4ee0b2c
Successfully built ee33dc76a35a
Successfully tagged helloworlddev:latest

Let’s get this container running with the following:

$ docker run -d --name helloworlddev -p 3000:3000 helloworlddev:latest
80e39ef208db1fbb15b02cbf49118ea72d39cd305cd3d4283b7c8321969fd941

Let’s make sure the container is running:

$ docker ps
CONTAINER ID        IMAGE                  COMMAND             CREATED             STATUS              PORTS                    NAMES
80e39ef208db        helloworlddev:latest   "npm start"         3 seconds ago       Up 2 seconds        0.0.0.0:3000->3000/tcp   helloworlddev

We can also see the size of the container:

$ docker images
REPOSITORY           TAG                 IMAGE ID            CREATED              SIZE
helloworlddev        latest              5db6aba3402f        About a minute ago   810MB
node                 9                   3d1823068e39        3 weeks ago          676MB

Note that our helloworlddev image is 810MB.
 

Step 4 – Create prod dockerfile

Now let’s create a ‘production’ container. Let’s make a file called Dockerfile.prod with the following:

FROM node:9 as builder
RUN mkdir /helloworld
WORKDIR /helloworld
COPY hello-world .

RUN npm install --quiet
RUN npm run build

# Copy built app into nginx container
FROM nginx:1.13.5
COPY --from=builder /helloworld/build /usr/share/nginx/html

EXPOSE 80

Build with:

$ docker build -t helloworld -f Dockerfile.prod .
Sending build context to Docker daemon  184.9MB
Step 1/9 : FROM node:9 as builder
 ---> 3d1823068e39
Step 2/9 : RUN mkdir /helloworld
 ---> Using cache
 ---> a2b2c7b8ff55
Step 3/9 : WORKDIR /helloworld
 ---> Using cache
 ---> 6428b9b01b5a
Step 4/9 : COPY hello-world .
 ---> Using cache
 ---> cf865e3184f1
Step 5/9 : RUN npm install --quiet
 ---> Using cache
 ---> b1da4529c2ea
Step 6/9 : RUN npm run build
 ---> Running in f0a0bece48f9

> hello-world@0.1.0 build /helloworld
> react-scripts build

Creating an optimized production build...
Compiled successfully.

File sizes after gzip:

  35.14 KB  build/static/js/main.d66de642.js
  177 B     build/static/css/main.f7a92a2d.css

The project was built assuming it is hosted at the server root.
To override this, specify the homepage in your package.json.
For example, add this to build it for GitHub Pages:

  "homepage" : "http://myname.github.io/myapp",

The build folder is ready to be deployed.
You may serve it with a static server:

  npm install -g serve
  serve -s build

 ---> d4f928fa7ae4
Removing intermediate container f0a0bece48f9
Step 7/9 : FROM nginx:1.13.5
1.13.5: Pulling from library/nginx
bc95e04b23c0: Pull complete
110767c6efff: Pull complete
f081e0c4df75: Pull complete
Digest: sha256:004ac1d5e791e705f12a17c80d7bb1e8f7f01aa7dca7deee6e65a03465392072
Status: Downloaded newer image for nginx:1.13.5
 ---> 1e5ab59102ce
Step 8/9 : COPY --from=builder /helloworld/build /usr/share/nginx/html
 ---> f55ef73c3580
Step 9/9 : EXPOSE 80
 ---> Running in 722d18289b70
 ---> e635eb890b32
Removing intermediate container 722d18289b70
Successfully built e635eb890b32
Successfully tagged helloworld:latest

Let’s start our production container:

docker run -d --name helloworld -p 80:80 helloworld
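To sanity-check that nginx is serving the production build, request the headers from the container (any HTTP client will do):

$ curl -I http://localhost

You should get an HTTP/1.1 200 OK response with a Server: nginx header.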

Note the difference in sizes:

$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED              SIZE
helloworld          latest              e635eb890b32        59 seconds ago       109MB
helloworlddev       latest              ee33dc76a35a        5 minutes ago        810MB
node                9                   3d1823068e39        4 days ago           676MB
nginx               1.13.5              1e5ab59102ce        2 months ago         108MB

helloworlddev is 810MB while helloworld is only 109MB!
 

Differences between the Two Containers

Now that the containers are created, we can take a closer look inside:

$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                          NAMES
0c2772795c51        helloworld          "nginx -g 'daemon ..."   2 seconds ago       Up 1 second         80/tcp, 0.0.0.0:80->80/tcp   helloworld
bccd86a60a9c        helloworlddev       "npm start"              44 seconds ago      Up 43 seconds       0.0.0.0:3000->3000/tcp         helloworlddev

We can easily go into the helloworlddev container to edit and test things quickly:

$ docker exec -it helloworlddev /bin/bash
root@bccd86a60a9c:/helloworld# apt-get update; apt-get install nano
root@bccd86a60a9c:/helloworld# nano src/App.js

The helloworld container, however, is difficult to edit and perform tests with:

$ docker exec -it helloworld /bin/bash
root@0c2772795c51:/# cat /usr/share/nginx/html/static/js/main.d66de642.js

Conclusion

I hope this helps users of create-react-app see how easy it is to create dev and prod docker images.

How to use salt-ssh

Introduction

SaltStack is well known for its event-based master/agent architecture, but you can also use Salt agentlessly via salt-ssh. At Sunayu we use salt-ssh to quickly update machines that do not have a Salt agent running. Learn more about Salt by reviewing the Salt documentation.

Prerequisites

To complete this tutorial you will need two CentOS 7 systems. In our example we use the following two machines:

  • c71 – The host we will run the salt-ssh commands from
  • c72 – The host we will configure via salt-ssh

Step 1 – Install salt-ssh

While you do not need an agent installed on the system you wish to manage with salt-ssh, you do need to install salt-ssh where you plan to run the commands from. Let’s install salt-ssh using salt’s bootstrap script.

curl -o bootstrap-salt.sh -L https://bootstrap.saltstack.com
sudo sh bootstrap-salt.sh

This will configure yum with SaltStack’s repo and install the salt minion. Now that we have the salt yum repo, we can install salt-ssh:

sudo yum -y install salt-ssh

Step 2 – Configure salt-ssh

Let’s make a directory for all of our salt-ssh files:

mkdir saltssh
cd saltssh

Now let’s make our master configuration file: vi master

log_level: info
root_dir: .
cachedir: cache
ssh_log_file: logs/master
pki_dir: pki
pillar_roots:
  base:
  - pillar
file_roots:
  base:
  - states

Now let’s make the directories we configured above.

mkdir cache logs pki pillar states

Your directory should now look like this:

$ ls
cache  logs  master  pillar  pki  states

Step 3 – Create our roster file

A roster file is how we tell salt-ssh which nodes to ssh to. Let’s create ours: vi roster

c72:
  host: c72
  user: centos
  passwd: 'reallygoodpassword'
  sudo: true
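If you would rather not keep a password in the roster, salt-ssh also supports key-based authentication. An equivalent roster entry looks like this (the key path is an example):

c72:
  host: c72
  user: centos
  priv: /home/centos/.ssh/id_rsa
  sudo: true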

Step 4 – Test connectivity

Now that we have our directory configured and our roster file set up, we can test connectivity to our node!

$ salt-ssh -i -c . 'c72' test.ping
c72:
    True

Notes:

  • The -i tells salt-ssh to ignore host keys
  • The -c . tells salt-ssh to only look in our current directory for configuration. This picks up the master config file and uses all of the local directories.

Step 5 – Run a state

Now that we have our node configured with salt-ssh we can run salt states to configure this machine. Let’s add our machine (c71) to its hosts file. First, let’s create a hosts.sls file inside the states directory: vi states/hosts.sls

add c71 to host file:
  host.present:
    - name: c71
    - ip: 172.18.222.5

Your file structure should look like this:

$ find .
.
./master
./cache
./logs
./logs/master
./pki
./pki/ssh
./pki/ssh/salt-ssh.rsa
./pki/ssh/salt-ssh.rsa.pub
./pillar
./states
./states/hosts.sls
./roster

Now let’s run the state using salt-ssh!

$ salt-ssh -i -c . 'c72' state.apply hosts
c72:
----------
          ID: add c71 to host file
    Function: host.present
        Name: c71
      Result: True
     Comment: Added host c71 (172.18.222.5)
     Started: 22:24:56.403528
    Duration: 1.409 ms
     Changes:
              ----------
              host:
                  c71

Summary for c72
------------
Succeeded: 1 (changed=1)
Failed:    0
------------
Total states run:     1
Total run time:   1.409 ms

You can now go into host c72 and verify that c71 has been added to its /etc/hosts file.

[root@c72 ~]# cat /etc/hosts | grep c71
172.18.222.5            c71

Conclusion

In this tutorial we covered how to set up a self-contained salt-ssh directory and run a simple state using salt-ssh. For more detailed use of salt-ssh, please check the official docs.

Toward Open Source Intrusion Tolerant SCADA

SCADA (Supervisory Control And Data Acquisition) systems are in charge of much of the world’s critical infrastructure. They have traditionally existed on private networks and, thus, security has often been an afterthought in their designs. Exploits against these systems have the potential to cause large-scale disasters, particularly as they begin to move to the open Internet.

At Sunayu, our experts have extensive backgrounds in distributed systems, including contributions to ongoing work focused on applying bleeding-edge research towards an open source, intrusion tolerant SCADA solution.

Read More at: http://www.dsn.jhu.edu/courses/cs667-2015/SCADA/

Server-Side API Calls Wrapped in Ruby Classes

There’s a ridiculous amount of information out there on the web, packaged in APIs. Using Rails, we can access a large portion of that information by making server-side API calls, and we can encapsulate the responses in Ruby classes to provide an interface to that data in our views.

In this blog post, we’ll use the Weather Underground API to create a simple app that gets the current weather and temperature when you enter a city and state. You can fork an example of this app, Quick Weather.ly. A general understanding of Ruby classes and Rails is needed to follow this post.

Let’s start by creating a brand new Rails application:

$ rails new weather

We’ll also add some dependencies we’ll need in our Gemfile:

gem 'httparty'
gem 'figaro'

The httparty gem will allow us to make HTTP requests on our server.

The figaro gem lets us hide our tokens and keys from version control while still using them in our application. We don’t want our keys public facing!

Let’s install our dependencies and run figaro’s installer as well.

$ bundle install
$ bundle exec figaro install

You’ll notice that a config/application.yml file was created and added to the .gitignore file. This file is where we will store our API key for the Weather Underground API and any future keys/tokens our application needs.

Let’s create a simple route in our config/routes.rb:

Rails.application.routes.draw do
  root 'weather#get_weather'
end

Our site will be simple, with only one path, the root URL, which gets the current weather for a city.

Since we have a route that maps to a weather controller, we need to make that now as well. Create the controller file app/controllers/weather_controller.rb with the following contents.

class WeatherController < ApplicationController
  def get_weather
    if params[:city] && params[:state]
      @forecast = Forecast.new(params[:city], params[:state])
    else
      @forecast = Forecast.new("washington", "dc")
    end
  end
end

Ok. Wait a minute. What the heck is this Forecast class? I haven’t defined that yet.

At a high level though, it seems like we might be getting some parameter values to create a new one, and if we don’t, it’ll just default to making one for Washington, DC.

So we need to define the Forecast model… We’ll be utilizing the httparty gem to make requests to an API inside this class definition. Before we define the class, let’s take a quick detour and retrieve a key for the API we’ll be using. Visit the Weather Underground API site and click Sign Up for FREE!

After signing up and validating your email, agree to the terms and sign in. Once there, click on Pricing (don’t worry, it’s free!). Make sure you click on the Stratus plan (the free one, but still great!) and then click on Purchase Key. Fill out the form and purchase a key. If you then click on the Documentation tab, you’ll see a URL somewhere in the middle of the page like this:

http://api.wunderground.com/api/<your key here>/conditions/q/CA/San_Francisco.json

If we visit that link we’ll see something like this:
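(A trimmed illustration of the response shape; the values are examples and most fields are omitted.)

{
  "current_observation": {
    "weather": "Partly Cloudy",
    "temp_f": 66.3,
    ...
  }
}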

Looking at this JSON object, we can see a whole bunch of useful information we may want for our application, like temp_f and weather. You can also change the URL with different cities and states and see all the information change. We want to encapsulate parts of this data in a Ruby object.

Let’s define that Forecast class now, beginning with a model file for forecasts: $ touch app/models/forecast.rb. In that file, place the following code:

class Forecast
  # creates getter methods for temp_f, weather, city and state.
  attr_reader :temp_f, :weather, :city, :state

  # initialize method takes 2 arguments city and state
  def initialize(city, state)

    # create the url using the city and state arguments. Also utilizing ENV
    # variable provided by figaro. Key value should be in 'config/application.yml'
    url = "http://api.wunderground.com/api/#{ENV["wunderground_api_key"]}/conditions/q/#{state.gsub(/\s/, "_")}/#{city.gsub(/\s/, "_")}.json"

    # utilizing httparty gem to make get request to the url prescribed in the
    # line above and storing the response into the variable below.
    response = HTTParty.get(url)

    # instantiating temp_f and weather by parsing through the JSON response
    @temp_f = response["current_observation"]["temp_f"]
    @weather = response["current_observation"]["weather"]

    # storing arguments as instance variables in the model
    @city = city
    @state = state
  end
end

One thing we notice right away is that this class does not inherit from any other class. Another thing to note is that the URL string is very similar, if not identical, to the one we entered into the browser.
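One optional hardening step (illustrative, not part of the original app): if Weather Underground doesn’t recognize the city, the response won’t contain current_observation and the code above will raise a confusing NoMethodError. A small guard inside initialize makes the failure obvious:

# Fail loudly if the API returned no conditions for this location.
obs = response["current_observation"]
raise ArgumentError, "no weather data for #{city}, #{state}" if obs.nil?

@temp_f = obs["temp_f"]
@weather = obs["weather"]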

Currently our model won’t work because we haven’t defined ENV["wunderground_api_key"]. We need to make sure we update our config/application.yml file with this information:

wunderground_api_key: your_key_info_goes_here

You can find your key by clicking on Key Settings on the Weather Underground API site.

Assuming you have a working key, we can now hop into the rails console and test our model out. We can see something like this if we instantiate a new forecast and pass in washington and dc as arguments:
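A console session along those lines (output shape only; your values will differ):

$ rails c
> Forecast.new("washington", "dc")
=> #<Forecast @temp_f=..., @weather=..., @city="washington", @state="dc">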


This is great. Now the information in our app/controllers/weather_controller.rb makes a little bit more sense:

class WeatherController < ApplicationController
  def get_weather
    if params[:city] && params[:state]
      @forecast = Forecast.new(params[:city], params[:state])
    else
      @forecast = Forecast.new("washington", "dc")
    end
  end
end

If there are parameters for a city and state, it will create a new forecast based on that city and state. If no parameter values are passed in, it will create a default forecast for Washington, DC.

Before we test this route out, let’s actually create the view that will have the form for city and state.

$ mkdir app/views/weather
$ touch app/views/weather/get_weather.html.erb

Great, let’s put the form and some displays in there. In app/views/weather/get_weather.html.erb:

<form method="get" action="/">
  <label>City:</label>
  <input name="city" type="text">
  <label>State:</label>
  <input name="state" type="text">
  <input type="submit">
</form>


<p>Temperature in <%= "#{@forecast.city}, #{@forecast.state}" %> is <%= @forecast.temp_f %></p>
<p>Current Weather in <%= "#{@forecast.city}, #{@forecast.state}" %> is <%= @forecast.weather %></p>

Great, let’s fire up the Rails server with $ rails s and navigate to http://localhost:3000 to test our application. We’ll see something like this:


If we enter Miami for the city and FL for the state, we’ll see this sort of result:

Your results may vary, as this API is updated relatively frequently. What’s cool is that this data is a real-time reflection of the API at the moment the request was made. Every time we click Submit, we get fresh information from the API.

This is a very small example of how we can leverage APIs and Ruby classes to encapsulate data from JSON responses. If you have a JSON endpoint with data you want to access, you can create a Ruby class to encapsulate that data in objects. Limitless possibilities.

Theory to Reality: Quantum Computing

I have been following the development of quantum computing for several years now. Anyone versed in the field understands there are still huge physics challenges plaguing the development and advancement of these systems.

One of the most common issues is the conditions needed to produce quantum entanglement. This spooky state of particle interaction has required temperatures colder than the vacuum of space in order to function. While producing these temperatures is a challenge, companies like D-Wave have managed to build functional multi-qubit systems, complete with highly dangerous chemicals. The cost and complexity of these systems, however, have limited their exposure to mainstream computer scientists throughout the world. In very exciting news this month, a few researchers announced that they have completed the first steps toward producing quantum entanglement using silicon-based qubits.

This discovery, once thoroughly vetted, could put quantum computers into the hands of many, not just the research community or wealthy firms. It could quickly have a massive impact on cryptanalysis. I look forward to more quantum advancements in 2017. The most recent NIST document, linked below, should be very telling of where we are headed.

NIST Presentation on Quantum impact to Crypto.

Quantum Computing in 2017