BettaClowd

Building a Better Agile Cloud with programmability and other cool networking things and cloud stuff! Please see the archive for past demo videos.

Thursday, May 4, 2017

This post combines Parts 1 and 2.

Part (1): Hortonworks Hadoop on Bare Metal vs Metacloud OpenStack
Video Overview: (If not interested in an overview/comparison, scroll down to Part (2) Demo)


Hortonworks Background:
The Apache Ambari project is aimed at making Hadoop management simpler by developing software for provisioning, managing, and monitoring Apache Hadoop clusters. Ambari provides an intuitive, easy-to-use Hadoop management web UI backed by its RESTful APIs.
The Hortonworks Data Platform (HDP): an enterprise-ready open-source Apache Hadoop framework that completely addresses the needs of data-at-rest processing, powers real-time customer applications, and accelerates decision-making and innovation.
Hortonworks Dataflow (HDF): accelerates deployment of big data infrastructure and enables real-time analysis via an intuitive graphical user interface. HDF simplifies, streamlines and secures real-time data collection from distributed, heterogeneous data sources and provides a coding-free, off-the-shelf UI for on-time big data insights. HDF provides a better approach to data collection which is simple, secure, flexible and easily extensible.
Spark: By moving the computation into memory, Spark enables a wide variety of processing, including traditional batch jobs, interactive analysis, and real-time streaming.
Spark enables applications in Hadoop clusters to run faster by caching datasets. With the data available in RAM instead of on disk, performance improves dramatically, especially for iterative algorithms that access the same dataset repeatedly.
Apache Kafka is a free and open-source distributed streaming platform. Apache Kafka for transport and Spark for analysis are becoming a very common pairing in the industry.
The data can also be ingested into the enterprise distributed data lake, traditional SQL databases and NoSQL databases where it can be used to power dashboards, reporting, interactive analysis, data mining and machine learning.
Apache Storm is a free and open source distributed realtime computation system. Storm makes it easy to reliably process unbounded streams of data, doing for realtime processing what Hadoop did for batch processing. 
Data-in-motion is handled by HDF, which collects the real-time data from the source, then filters, analyzes, and delivers it to the target data store. HDF efficiently collects, aggregates, and transports large amounts of streaming event data, processing it in real time as necessary before sending it on.
HDF was designed specifically to meet the practical challenges of collecting data from a wide range of disparate data sources securely, efficiently, and over a geographically dispersed and possibly fragmented network.
Streaming Analytics connects to external data sources, enabling applications to integrate certain data into the application flow.
Building next-generation big data architecture requires simplified and centralized management, high performance, and a linearly-scaling infrastructure and software platform. Big Data is now all about data-in-motion, data-at-rest and analytic applications.
Cisco UCS (bare-metal) option:
This CVD: http://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/UCS_CVDs/Cisco_UCS_Integrated_Infrastructure_for_Big_Data_and_Analytics_with_Hortonworks_and_HDF.html describes in detail the process for installing Hortonworks 2.4.2 with Apache Spark, Kafka, and Storm, including the configuration details of the cluster. It also details application configuration for the HDF libraries.
The CVD explains how to build an HDP/HDF cluster using Cisco Fabric Interconnects and UCS Manager to deploy the UCS C-Series servers, along with scripts to install RHEL 7.2 and build the Ambari server and agents needed to deploy the Hadoop cluster and other applications.
Metacloud OpenStack:
In a nutshell, Metacloud is intended for customers who are looking for a public cloud experience delivered behind their own firewalls. The hardware is installed either in their own brick-and-mortar data centers or in a carrier-neutral colocation facility.
Metacloud is a service in which Cisco delivers and lifecycle-manages both the hardware and a fully functional, out-of-the-box OpenStack deployment, with day-2 management services and a 99.99% SLA for the availability of the OpenStack services and APIs.
The Part (2) demo will provide insights into using Ansible as a deployment tool to orchestrate an end-to-end automation workflow.
Conclusion:
Metacloud makes OpenStack easier because the Cisco service includes full day-0 through day-2 lifecycle management of the hardware (compute, network, and storage) along with the OpenStack software, while giving customer admins access to the OpenStack APIs. Consequently, the admins can leverage these APIs to automate the deployment of virtual machines and other resources to host the analytical applications and Hadoop data cluster. For example, Heat, Ansible, or other orchestration toolsets can give clients a one-touch Hadoop workflow for building the entire platform.
In the past, bare metal was necessary from a performance perspective but lacked a common framework for orchestration and lifecycle management. With options for PCI pass-through or the Ironic bare-metal service in OpenStack, performance is no longer an inhibitor of virtualized big data. The aforementioned CVD documents the process of building Cisco rack servers with UCS Manager and then scripting the Ambari and Hadoop deployment. As previously mentioned, the OpenStack APIs are a more elegant and holistic "one-touch" Hadoop-as-a-service delivery option. Second, Metacloud provides the alternative of having Cisco manage the underlying hardware, which removes another operational burden from the customer infrastructure teams, who would otherwise need to explore public cloud for a similar arrangement.
Part (2): Demo of deploying Hortonworks Hadoop cluster with Ansible.
Demo Video: Don't forget to set YouTube to 720p HD.


What you will see in this Demo:
•Ansible to provision VMs and storage in OpenStack
•Ansible to install the Ambari server and agents on the VMs
•Ansible to launch Blueprints that build the Hortonworks Hadoop cluster from the APIs
•A simple Ansible playbook to configure YARN and MapReduce and produce a word count
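The provisioning step above can be sketched as an Ansible playbook using the OpenStack modules (os_volume, os_server). This is a minimal sketch, not the demo's actual playbook; the volume size, image, flavor, key, and network names below are hypothetical placeholders:

```yaml
---
- name: Provision Hadoop VMs and storage in Metacloud OpenStack
  hosts: localhost
  connection: local
  gather_facts: no
  tasks:
    - name: Create a data volume for each node
      os_volume:
        state: present
        size: 100                       # GB, placeholder value
        display_name: "{{ item }}-vol"
      with_items: [ ambari-master, ambari-data1, ambari-data2 ]

    - name: Boot the instances with their volumes attached
      os_server:
        state: present
        name: "{{ item }}"
        image: centos7                  # hypothetical image name
        flavor: m1.large                # hypothetical flavor
        key_name: hadoop-key            # hypothetical keypair
        network: private-net            # hypothetical tenant network
        volumes: [ "{{ item }}-vol" ]
      register: servers
      with_items: [ ambari-master, ambari-data1, ambari-data2 ]
```

The registered `servers` result carries the assigned IPs, which is how a playbook like this can write out the inventory and /etc/hosts files for the later steps.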

Download the YAML and JSON files from GitHub:
Readme:

metacloud_ansible_hadoop

Deploying Hadoop with Ansible on Cisco Metacloud OpenStack
Step 1: Run the Bash script to start the Ansible playbooks
~$ . hadoopplaybook
It invokes these playbooks:
hadoopvm.yaml # creates volumes, keypairs, and instances, and copies the assigned IPs to host.txt and other text files to build the Ansible inventory and /etc/hosts for the virtual machines
installambari-master.yaml # copy host files, mount volume, install Ambari server and agent
installambari-data1.yaml # copy host files, mount volume, install Ambari agent
installambari-data2.yaml # copy host files, mount volume, install Ambari agent
blueprintcluster.yaml # curl API commands to register and deploy the Hadoop cluster blueprint into Ambari

wordcount.yaml # configures and runs YARN and MapReduce to provide a simple word count
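For orientation, a word count task like the one wordcount.yaml runs can be sketched as below. This is a hedged sketch, not the demo's file: the jar path is the usual HDP location (verify it on your install), and the input/output HDFS paths are placeholders:

```yaml
---
- name: Run a simple MapReduce word count
  hosts: ambari-master
  become: yes
  tasks:
    - name: Copy a sample text file into HDFS
      shell: |
        hdfs dfs -mkdir -p /tmp/wordcount/input
        hdfs dfs -put -f /etc/hosts /tmp/wordcount/input/
      become_user: hdfs

    - name: Submit the word count job to YARN
      shell: >
        yarn jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar
        wordcount /tmp/wordcount/input /tmp/wordcount/output
      become_user: hdfs
```

The stock hadoop-mapreduce-examples jar ships with HDP, so no application code needs to be written for this smoke test.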
Access Ambari at http://<ambari-master IP address>:8080
Default login: admin/admin
How to create a blueprint for Hortonworks: https://community.hortonworks.com/articles/47170/automate-hdp-installation-using-ambari-blueprints.html
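For a rough idea of what such a blueprint contains: it is a JSON document naming the stack and the components per host group. The sketch below is heavily trimmed and the host group layout is illustrative, not the one from this demo:

```json
{
  "Blueprints": {
    "blueprint_name": "hadoop-demo",
    "stack_name": "HDP",
    "stack_version": "2.4"
  },
  "host_groups": [
    {
      "name": "master",
      "cardinality": "1",
      "components": [
        { "name": "NAMENODE" },
        { "name": "RESOURCEMANAGER" }
      ]
    },
    {
      "name": "workers",
      "cardinality": "2",
      "components": [
        { "name": "DATANODE" },
        { "name": "NODEMANAGER" }
      ]
    }
  ]
}
```

The blueprint is registered with a POST to /api/v1/blueprints/&lt;name&gt; and instantiated with a POST of a matching host-mapping document to /api/v1/clusters/&lt;name&gt;, which is what the curl calls in blueprintcluster.yaml drive.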

Please check out my other cloud and Ansible demos in my archives at http://bettaclowd.blogspot.com






Tuesday, April 4, 2017

Using Ansible 2.2 to program Cisco Nexus switches via their APIs

Use Ansible to treat your Cisco network as Code!
DevOps in the NetOps... :)

Demo Video:

Introduction:
This demo uses Ansible 2.2 to program Nexus switches using the device API (called NX-API).

Ansible:
Ansible is an open source IT configuration management and automation tool. Similar to Puppet and Chef, Ansible has made a name for itself among system administrators who need to manage, automate, and orchestrate various types of server environments. Unlike Puppet and Chef, Ansible is agentless, and does not require a software agent to be installed on the target node (server or switch) in order to automate the device. By default, Ansible requires SSH and Python support on the target node, but Ansible can also be easily extended to use any API. The Ansible modules developed for NX-OS as part of this project make API calls against the NX-API to gather real-time state data and to make configuration changes on Cisco Nexus devices.
To review the Nexus modules, please see http://docs.ansible.com/ansible/list_of_network_modules.html#nxos or, from Linux, /usr/share/ansible/cisco_nxos

Prerequisites:
VIRL is optional. An actual Cisco Nexus switch can be substituted. 
Get VIRL: http://virl.cisco.com/getvirl/
VIRL on dCloud: https://dcloud-cms.cisco.com/demo_news/ansible-for-cisco-nexus-switches-v1
VIRL on DevNet: https://devnetsandbox.cisco.com/RM/Diagram/Index/54657202-fa36-45d4-ac1c-f02ba6b31349?diagramType=Topology
Download Ansible: https://ansible-tips-and-tricks.readthedocs.io/en/latest/ansible/install/
Ansible/Cisco Nexus how-to on DevNet: https://developer.cisco.com/site/nx-os/docs/automation/ansible/#what-is-nexus-nx-api
Download the Ansible playbook .yml files from this demo: https://github.com/andubiel/ansiblenexusbeginner/archive/master.zip
Authenticating Ansible for your Nexus:
Create the file:
~$ sudo vim .netauth
Edit it with a text editor (the example above uses vim) so that it follows the format below. Your file MUST look like the one provided (just insert the username and password for your switches).
# the .netauth file
# make sure you input the proper creds for your device
---

cisco:
  nexus:
    username: "cisco"
    password: "cisco"
Enable NX-API on Switch:

nx-osv-1# conf t
Enter configuration commands, one per line.  End with CNTL/Z.
nx-osv-1(config)# feature nxapi
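Once the feature is enabled, it is worth sanity-checking that the endpoint answers before pointing playbooks at it. One way, sketched below with Ansible's uri module, is to POST a JSON-RPC "show version" to the /ins endpoint NX-API exposes; the switch IP and credentials are placeholders, and the exact payload format may vary by NX-OS version:

```yaml
---
- name: Verify NX-API is answering
  hosts: localhost
  connection: local
  gather_facts: no
  tasks:
    - name: POST a "show version" to the JSON-RPC endpoint
      uri:
        url: "http://172.16.1.100/ins"    # placeholder switch IP
        method: POST
        user: cisco                        # placeholder credentials
        password: cisco
        headers:
          Content-Type: "application/json-rpc"
        body: '[{"jsonrpc": "2.0", "method": "cli", "params": {"cmd": "show version", "version": 1}, "id": 1}]'
        return_content: yes
      register: nxapi_check
```

An HTTP 200 with a JSON body in `nxapi_check.content` confirms the API is reachable with those credentials.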
Hosts file:
By default, Ansible uses a hosts file for inventory located in /etc/ansible
root@virl:/usr/share/ansible# cd /etc/ansible/

root@virl:/etc/ansible# ls

ansible.cfg  hosts  roles

root@virl:/etc/ansible#
When issuing the ansible-playbook command, -i {name of hosts file} can be used to point to an inventory located in the local file path instead of /etc/ansible/hosts.
root@virl:/home/virl/nxos-ansible/nxos-ansible# vi hosts

[all:vars]
ansible_connection = local

[spine]
#n9k1
#n9k2 

[leaf]    
nx-osv-1 
#nx-osv-2 
#csr1000v-1
Ansible playbook .yml files for the demo:
Download the Ansible playbook .yml files from the demo: https://github.com/andubiel/ansiblenexusbeginner/archive/master.zip
  • gather_data.yml
  • vlan-provisioning.yml
  • vlan-unprovisioning.yml
  • examples-static_routes.yml
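vlan-provisioning.yml is not reproduced in this post, but a task using the nxos_vlan module in the same key=value style as gather_data.yml below would look roughly like this sketch. The VLAN ID and name are illustrative, and parameter names may differ slightly between the nxos-ansible repo modules and core Ansible:

```yaml
---
- name: Provision a VLAN
  hosts: leaf
  connection: local
  gather_facts: no

  tasks:
    - name: ensure VLAN 100 exists with a name    # VLAN ID/name are illustrative
      nxos_vlan: vlan_id=100 name=web_vlan state=present host="{{ inventory_hostname }}"
```

Swapping state=present for state=absent is the essence of vlan-unprovisioning.yml.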
Examining gather_data.yml:
---
- name: Gather Data
  hosts: leaf
  connection: local
  gather_facts: no

  tasks:

  - name: get neighbors
    nxos_get_neighbors: type=cdp host="{{ inventory_hostname }}"
    register: my_neighbors
  - name:  get routing table for mgmt VRF
    nxos_command:
      type: show
      host: "{{ inventory_hostname }}"
      command: "show ip route vrf management"
    register: my_routes
  - name: store to file 
    template: src=templates/data.j2 dest=data/{{ inventory_hostname }}_gather_data.json
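The templates/data.j2 file is not shown in the post; a minimal sketch that would produce output like the JSON shown further below simply dumps the two registered variables (the variable names match the playbook, the layout is illustrative):

```jinja
CDP Info:

{{ my_neighbors | to_nice_json }}

Route Info:

{{ my_routes | to_nice_json }}
```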
Run an Ansible 2.2 playbook:
root@virl:/home/virl/nxos-ansible# ansible-playbook -i hosts gather_data.yml 

PLAY [Gather Data] *************************************************************
TASK [get neighbors] ***********************************************************
ok: [nx-osv-1]
TASK [get routing table for mgmt VRF] ******************************************
ok: [nx-osv-1]
TASK [store to file] ***********************************************************
changed: [nx-osv-1]
PLAY RECAP *********************************************************************
nx-osv-1                   : ok=3    changed=1    unreachable=0    failed=0   

root@virl:/home/virl/nxos-ansible/nxos-ansible# 
Examine the .json output from the playbook:
root@virl:/home/virl/nxos-ansible/nxos-ansible# cat data/nx-osv-1_gather_data.json 

CDP Info:

{
    "ansible_facts": {
        "neighbors": [
            {
                "local_interface": "mgmt0", 
                "neighbor": "csr1000v-1.virl.info", 
                "neighbor_interface": "GigabitEthernet1"
            }, 

            {
                "local_interface": "Ethernet2/1", 
                "neighbor": "csr1000v-1.virl.info", 
                "neighbor_interface": "GigabitEthernet2"
            }
        ]
    }, 
    "changed": false
}

Route Info:
{
    "changed": false, 
    "commands": "show ip route vrf management", 
    "proposed": {
        "cmd_type": "show", 
        "commands": "show ip route vrf management", 
        "text": null
    }, 
    "response": [
        {
            "body": {
                "TABLE_vrf": {
                    "ROW_vrf": {
                        "TABLE_addrf": {
                            "ROW_addrf": {
                                "TABLE_prefix": {
                                    "ROW_prefix": [
                                        {
                                            "TABLE_path": {
                                                "ROW_path": {
                                                    "clientname": "direct", 
                                                    "hidden": "false", 
                                                    "ifname": "mgmt0", 
                                                    "ipnexthop": "172.16.1.65", 
                                                    "metric": "0", 
                                                    "pref": "0", 
                                                    "stale": "false", 
                                                    "stale-label": "false", 
                                                    "ubest": "true", 
                                                    "unres": "false", 
                                                    "uptime": "P4DT20H26M3S"
                                                }
                                            }, 
                                            "attached": "true", 
                                            "ipprefix": "172.16.1.0/24", 
                                            "mcast-nhops": "0", 
                                            "ucast-nhops": "1"
                                        }, 
                                        {
                                            "TABLE_path": {
                                                "ROW_path": {
                                                    "clientname": "local", 
                                                    "hidden": "false", 
                                                    "ifname": "mgmt0", 
                                                    "ipnexthop": "172.16.1.65", 
                                                    "metric": "0", 
                                                    "pref": "0", 
                                                    "stale": "false", 
                                                    "stale-label": "false", 
                                                    "ubest": "true", 
                                                    "unres": "false", 
                                                    "uptime": "P4DT20H26M3S"
                                                }
                                            }, 
                                            "attached": "true", 
                                            "ipprefix": "172.16.1.65/32", 
                                            "mcast-nhops": "0", 
                                            "ucast-nhops": "1"
                                        }
                                    ]
                                }, 
                                "addrf": "ipv4"
                            }
                        }, 
                        "vrf-name-out": "management"
                    }
                }
            }, 
            "code": "200", 
            "input": "show ip route vrf management", 
            "msg": "Success"
        }
    ]
}
root@virl:/home/virl/nxos-ansible/nxos-ansible
Please see the above video for additional examples: https://youtu.be/wt4o9ebLGSQ
Thank you and stay tuned for more demos @ http://bettaclowd.blogspot.com/