Acronyms and Terms

This is a permanently incomplete list of the terms used in this new ‘programmability’ and ‘SDN’ space. It’s worth looking these terms up somewhere else (lmgtfy) as well for additional perspective.

If everything is confusing and you don’t know what SDN, automation, OpenFlow, or any of that crazy stuff is and want it explained like you’re 5, check out this link.

Have a question or disagreement about a term here or one that’s missing? Hit up the Mailing List.

 


Acronym
Definition
SDN Software Defined Networking. This is a term that’s extremely loosely defined and debated. To some it means letting software make all the networking decisions and leaving hardware to do the forwarding. To others it means allowing applications and software to interact with the network – where the application can directly change network properties like bandwidth, routing, and delay that normally don’t change in a network. In another view, it means creating a whole new, entirely software-based network on top of the old one, known as an overlay: you’re running a hardware network and then a software network on top. Others may view SDN as running physical ‘things’ like routers and switches in software, such as in a virtual machine.
OpenFlow OpenFlow is one example of an SDN technology, and is often linked to the definition of SDN. Inside a router there are effectively two parts: one that makes intelligent decisions (the control plane) and one that simply sends packets the right way (the data plane). OpenFlow moves the control plane of many devices to one (or more) central places, so that all that is left inside the hardware is the simpler data plane. This lets a network make all decisions in one place, with the goal of simplifying network devices and protocols. OpenFlow has different versions, with the most popular currently being 1.0 and 1.3.
Controller A controller is the master in the master/slave relationship between different SDN and programmability components. Most often this is an OpenFlow controller, but it can also be a piece of software that uses other protocols.
NFV Network Functions Virtualization. NFV is the concept of virtualizing existing network services for the purpose of automation or orchestration. In short, NFV is turning routers, firewalls, load balancers, and other services into virtual machines for better agility. Much of this has happened naturally, but much of what NFV looks to solve is the automation and orchestration of those new virtual machines. NFV today is largely being driven by service provider networks, though the use cases are widely applicable. It’s also tied to concepts like Service Chaining, and is arguably related to SDN.
Overlay Networks The Overlay Network is a method of creating a new network on top of an existing network. This is similar to tunneling, VPNs, GRE, and even Virtual machines on physical hardware. An overlay network doesn’t know anything about the physical “underlay” network underneath, so it can create completely new rules and routing policies no matter what is underneath it. This can make creating a new environment easy since there isn’t a dependency on the ‘other’ network. There can be multiple overlay networks for any ‘underlay’ network so you can have a separate overlay for each application, environment, or tenant. This is one of the enablers of Microsegmentation. Examples of Overlay technologies: MPLS, VXLAN, NVGRE, DMVPN, OTV
OpenStack OpenStack is an open-source project to bring together datacenter resources into a single architecture; another way to think of it is an open-sourced cloud. OpenStack aims to create software that abstracts away infrastructure differences: bring your own computing, network, and storage, and once OpenStack is installed the infrastructure details are in theory no longer important, because all changes are done through the abstracted API that OpenStack offers. Think about what Linux did for the operating system – OpenStack aims to do the same for the datacenter/cloud. It’s broken into projects for each type of infrastructure or application. Common modules include: Nova (computing), Quantum (old networking module), Neutron (networking module), Cinder (block storage), and Heat (automation). OpenStack website explanation.
DevOps DevOps is a combination of technologies, tools, processes, and mindsets that aim to reduce the friction between Development and Operations. By introducing automation, a sharing culture, testing, and communication both groups are able to operate better. Think about taking someone from the development team and putting them on operations, and vice versa so that their incentives are aligned. This movement has created environments where changes are pushed to production hundreds or thousands of times a day with high consistency, as well as increasing the scale operations can handle with the same staff. For networking people, it’s a glimpse into a crystal ball of the challenges and effects automation can have on a broad set of IT professionals. The problems DevOps has solved thus far have been very server, cloud, and operating system (Sys Admin) focused, and networking is coming into the fray recently.
Network Programmability This term generally refers to the ability of an infrastructure device (like a switch, router, or firewall) to be modified or configured by software. Typically this is done via an API, though other protocols such as NETCONF/YANG, SNMP, OnePK, and OpenFlow can also be used. The end result is faster and more accurate programming of devices compared to more manual methods like the CLI.
API Application Programming Interface. APIs allow other pieces of software to view or change mechanisms internal to the software beneath the API. Through an API call you may find out how much memory is free on a server, or alternatively tell a service to create a new virtual machine. Popular protocols for this are REST and SOAP. APIs have been around for a very long time, but they are a new concept to infrastructure which has historically been exposed via SNMP or NETCONF.
REST Representational State Transfer. REST describes a high-level architecture for how an API should act, and APIs that conform to this ideal are called RESTful. It refers to using standard HTTP requests (GET, PUT, POST, etc.) against a device to retrieve or place data. Placing a call like PUT http://router/api/route/add with a JSON body of { "route": "192.168.1.0/24", "next_hop": "10.1.1.1" } might change the router’s route table to add a new route to 192.168.1.0/24 via 10.1.1.1. Typically the data format within the HTTP body is JSON, but XML is also common. Wikipedia is always good.
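As a sketch of the idea – note that the URL and field names here are illustrative, not any real router’s API – the ‘add route’ call above could be assembled like this in Python:

```python
import json

def build_add_route_request(network, next_hop):
    """Build the pieces of a hypothetical RESTful 'add route' call.

    The URL and JSON field names are assumptions for illustration;
    every real router defines its own API paths and schema.
    """
    method = "PUT"
    url = "http://router/api/route/add"
    body = json.dumps({"route": network, "next_hop": next_hop})
    return method, url, body

method, url, body = build_add_route_request("192.168.1.0/24", "10.1.1.1")
# An HTTP client library would then send the call, along the lines of:
#   requests.put(url, data=body, headers={"Content-Type": "application/json"})
```

The point is that a route change becomes an ordinary HTTP request carrying structured data, rather than a CLI session.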
HTTP Hypertext Transfer Protocol. The protocol of the web, primarily associated with web pages. Where most are familiar with HTTP requesting web content in the body, the same protocol can carry other data (voice calls, torrent data, etc.). For the purposes of APIs it typically carries JSON or XML data rather than a web page. It includes a number of methods such as GET, PUT, DELETE, and POST, which describe different actions when used with the same URL and data.
XML Extensible Markup Language. XML is often used with web data types, though it is not limited to web technologies. It allows nested grouping of information into a data structure, along with a number of more advanced features covering versions, attributes, and encoding types. It is also widely used in APIs. A sample XML data set may look like this:

<header>

<unit1>one</unit1>

<unit2>two</unit2>

</header>
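The sample above can be parsed with Python’s standard library, for instance:

```python
import xml.etree.ElementTree as ET

# The same <header> sample from above, collapsed into one string.
xml_data = "<header><unit1>one</unit1><unit2>two</unit2></header>"

root = ET.fromstring(xml_data)   # parse the string into an element tree
first = root.find("unit1").text  # navigate to a nested element's text
```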

JSON Javascript Object Notation. Despite its name, it’s a language-independent data format with support across many languages. It is an alternative to XML, though many interfaces support both. JSON is currently the more popular format for RESTful interfaces. Wikipedia is always good.
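A quick illustration of JSON round-tripping in Python, using a made-up route entry:

```python
import json

# A route entry as a Python dictionary (field names are illustrative).
route = {"route": "192.168.1.0/24", "next_hop": "10.1.1.1"}

# Serialize to a JSON string, as it would travel in an HTTP body...
text = json.dumps(route)

# ...and parse it back into a dictionary on the receiving side.
parsed = json.loads(text)
```

The serialized form maps cleanly onto native data types in most languages, which is a big part of JSON’s popularity.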
SaaS, PaaS, IaaS Software, Platform, and Infrastructure as a Service (with Metal as a Service below that). Applications sit on platforms, and platforms sit on infrastructure. Picture a web site using Apache, running on Linux, running on a server: the web site is the application; Apache and its runtime environment are the platform; Linux with the associated storage and networking is the infrastructure; and the physical server is the metal. Software as a Service is typically a full application delivered from the cloud, with no infrastructure near the end user. Platform as a Service is typically a software stack or service that can itself host many types of applications (Heroku, database services). PaaS tends to be developer oriented, as the only thing in your control is the application; everything from the operating system down is managed for you. Infrastructure as a Service includes the entire environment: computing (memory/storage/processor), storage, networking, and security.

SaaS: Webex, Salesforce.com

PaaS: Heroku and others; most IaaS providers also have PaaS offerings

IaaS: Amazon Web Services (AWS), Azure, Google Compute Engine

AWS, Amazon Amazon Web Services. AWS is a cloud provider offering IaaS, PaaS, and an array of other services delivered via the cloud. It’s the most popular and best-known IaaS provider, with a wide array of application, API, and ecosystem capabilities. It contains its own dictionary of terms – instances, AMIs, EC2, VPC, region, availability zone, S3, Glacier, Beanstalk, CloudFront, Marketplace, etc. A new user can have a virtual machine or appliance running in a few minutes and expand it quickly. There is quite a bit of documentation here.
Arduino Arduino is an open-source microcontroller board used in many embedded systems designs. You can buy an Arduino for less than $30 and there are reams of documentation for it. Arduino makes it easy to buy any number of cheap electronic components – infrared sensors, temperature sensors, switches, relays, motors, existing electronic circuits, etc. – and create software to control them. The code is written in C and is low level, but an Arduino can be paired with something like a Raspberry Pi to handle higher-level operations that require an operating system, such as a web server.
Raspberry Pi The Raspberry Pi is a credit-card-sized computer that is great for building projects around. It is easily adapted to plug into the things around it and is good for exploring new programming languages. There is a large Maker community around this device, and it is a great way to make use of even a small amount of programming knowledge. The website.
Hadoop, Big Data Hadoop gets thrown in with SDN, Automation, and Cloud because it’s a loosely related trend and a ‘killer app’ for many businesses, so it’s worth knowing about. It’s an open-source framework for storing and analyzing very large data sets. Most traditional relational databases (most everything except Hadoop – SQL Server, Oracle, MySQL) struggle to analyze data sets too large for a single machine, and they also require the data to be structured. Structured data is like an Excel spreadsheet; unstructured data would be something like a video. One of the biggest values of Big Data is the ability to combine many types of data to come to conclusions. Hadoop has a litany of other terms associated with it – HDFS, YARN, Hive, HBase, Pig, R, MapReduce, and many more.
Hypervisor The hypervisor is an abstraction that sits between a Virtual Machine (VM) and the hardware. A hypervisor allows applications installed on top of it (Virtual Machines) to believe they’re running on their own hardware. This makes consolidation of applications on a single piece of hardware possible. Examples of hypervisors: VMware ESXi, ESX, KVM, Xen, and Microsoft Hyper-V.
OpenDaylight (ODL) OpenDaylight is an open-source project run by the Linux Foundation. It aims to bring many of the large vendors together to create an open-source SDN controller. It’s currently an OpenFlow-based controller with a number of subprojects, and it has integrations with OpenStack and other open-source software.
Northbound Interface The Northbound Interface or API is what a device or application exposes to other applications to collect data or modify the device. This allows for other pieces of software to control or monitor the application exposing the API. In the case of a controller, the Northbound API would expose network properties so that applications or orchestration tools can change or monitor network properties. The most common type of Northbound interface is a RESTful HTTP API using JSON or XML.
Southbound Interface The Southbound Interface, protocol, or API is what a controller or application uses to control devices or other pieces of software. Often the Southbound protocol is master/slave where the master has full control of the devices it communicates with. The Southbound protocol may also include device management or reporting capabilities. Southbound interfaces are more varied, as devices are still very fragmented with their support. Examples are: OpenFlow, OpFlex, SNMP, BGP, PCEP, NETCONF, CLI, and OVSDB. Sometimes these different protocols are made generic in a Service Abstraction Layer (SAL).
East-West Interface When a group of sub-applications lives inside a larger application, APIs may be exposed within the application that aren’t public. These may be inner workings that are not designed to be exposed, would pose a security risk, or are largely useless outside the inner application mechanisms. These are East-West APIs. The external APIs would likely be Northbound or Southbound Interfaces. In some scenarios the Northbound interface may just proxy and consolidate existing East-West APIs.
Virtual Switch A virtual switch most often refers to the in-host switching of a hypervisor. Inside a host with virtual machines there may be only one physical network connection but many virtual machines, each with its own network requirements. To solve this, a virtual switch is created within the hypervisor to deliver policy and connectivity to each virtual machine. This virtual switch also interacts with the physical network and may use technology such as LACP, Spanning Tree, 802.1Q, and other traditional switching protocols. Some examples of virtual switches: VMware’s vSwitch and Distributed Virtual Switch (DVS), Cisco’s Nexus 1000V (N1k), and Open vSwitch (OVS).
Git / GitHub Git is a source code management and distributed revision control tool. It’s used to keep different versions of code organized into branches and provides the ability to track code changes. It pairs naturally with GitHub, an online service for hosting repositories and synchronizing code changes. GitHub is the most widely used way to share open-source code today, due to it being free for public repositories and its ease of use.
Python Python is a programming language, typically used for scripting. It’s dynamic and easier to learn than many other languages, and has a large amount of community-driven material including examples, libraries, SDKs, and forums. There are currently two major versions, Python 2.7 and 3.x. The large majority of Python scripts available are compatible with 2.7, and fewer with 3.x, though that is changing over time. Check that the version you’re using matches the documentation you’re working with.
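Since 2.x and 3.x scripts aren’t always interchangeable, a small sketch of checking which major version a script is running under:

```python
import sys

# sys.version_info reports the interpreter version as a tuple,
# e.g. (3, 4, 0, ...); index 0 is the major version.
major = sys.version_info[0]

if major >= 3:
    message = "Running Python 3: print is a function, strings are Unicode."
else:
    message = "Running Python 2: print is a statement, strings are bytes."

print(message)
```

Scripts that must support both versions often branch on this check (or use a compatibility library) before touching version-specific features.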
SDK Software Development Kit. Typically an SDK is available for a process or application that already has an API; the SDK is a set of libraries that organizes and manages the lower-level details to make calling those APIs easier. An example would be the AWS SDKs, which handle authentication details more easily than raw REST requests.
Ubuntu Ubuntu is a popular Linux distribution created by Canonical. It’s the most popular cloud operating system, largely because it’s free and easy to use. Ubuntu comes in both Server and Desktop versions, and there are Long Term Support (LTS) releases that are popular base operating systems for applications and tools. Ubuntu also has a large community support ecosystem. Alternatives would be other distributions, such as Red Hat. For other distributions check this site out
RHEL Red Hat Enterprise Linux. Red Hat built its business model on supporting its enterprise Linux distribution, RHEL, and holds a large portion of the enterprise Linux market. The free community rebuild of the RHEL code is CentOS, which usually has similar application support since it’s built from the same open-source code.
x86 x86 is a processor instruction set from Intel. If an application is x86 compatible, it can run on a large ecosystem of hardware and can likely be virtualized. Most other processor instruction sets (POWER, ARM, mainframe sets) have smaller hardware and software ecosystems and are less likely to support virtualization.
Waterfall Model  The Waterfall methodology is a software development cycle where each phase of software development is reviewed and finished before starting the next. There is more effort spent in the beginning on identifying and fixing faults before they’re pushed on to the next phase. Waterfall has come under criticism from more recent methodologies like Agile. Many software development groups are in the process of migrating from Waterfall to Agile due to faster release cycles with higher quality.
Agile Agile is a software development methodology that focuses on fast iteration of development over completeness or formal plans. Agile software groups may release software one or more times per day, which iteratively improves software faster. The concept of many software releases per day is a challenge for organizations that have been built around releases every week, month, or longer. Agile is a large driver of efficiencies in the DevOps movement, and includes project management methodologies such as Scrum. Agile is seen as a replacement for, or enhancement of, the Waterfall method.
Scrum Scrum is an Agile project management methodology that focuses on fast delivery and iteration. Scrum includes practices like daily 15-minute stand-up meetings and short sprints in which features are scrapped, created, or changed as requirements and timelines change. It is a change in process from longer, more formal release cycles, which are thought to be less efficient and slower. The premise is that it’s easier to adapt quickly to change than to understand all the requirements fully before creating something.
SDLC  Systems/Software Development Life Cycle. This is the set of processes that happen in software development, from concept to production. It’s good to know the terminology and how much happens to software before it is sent to the operations and infrastructure teams. Both Agile and Waterfall are software development methodologies within the SDLC. Part of the problem DevOps looks to solve is how to get operations/infrastructure/security people involved earlier in this lifecycle as problems are cheaper to fix the earlier they are found.
Puppet Puppet is an open-source configuration management tool. It’s used primarily for the configuration of Linux hosts, as well as Windows. Configuration state is added into what’s called a Puppet manifest, using Puppet’s declarative language. With more and more systems using Linux as a base operating system, Puppet’s scope can expand to include switches, routers, automation systems, and packages like OpenStack. Puppet is used for large-scale system management and is considered a DevOps tool. Its major alternatives are Chef and CFEngine, though Puppet currently has more traction in the community.
Chef  Chef is an open source configuration management tool. It’s used for the configuration of Linux hosts primarily. The configuration is stored into a Chef Recipe, which is written in Ruby. Chef is often compared to Puppet, as they serve the same purpose of configuration management. Chef has the advantage of being more powerful with the Ruby language in the recipes, but has a steeper learning curve. It’s also part of the DevOps toolset.
Declarative The declarative method focuses on stating the end goal of a system rather than explicitly stating the steps required to get there. In short: what should happen instead of how it should happen. Declarative systems require the underlying systems to be autonomous enough to work out their own steps to meet a policy goal. An example is a conductor in an (actual) orchestra: the conductor states what needs to happen when, but is not responsible for producing every note from every instrument. This is also referred to as ‘herding cattle’. If the systems responsible for making the change cannot do so for any reason, they raise a fault with the system that requested the policy change, and faults are then handled on a case-by-case basis. The alternative to the declarative model is the imperative model.
Imperative The imperative methodology is an explicit, step-by-step procedure with errors handled within the logic. This allows deeper control when an error occurs, but the logic must be aware of all types of errors and how they should be handled. This becomes topical with automation and policy deployment at scale: systems have historically been imperative, while many newer technologies are declarative. In large automated systems, having explicit logic for every step on every system, along with knowledge of how to handle every type of error, begins to impact scale and robustness.
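A toy Python contrast of the two styles – building a list here stands in for any system change:

```python
# Imperative: spell out each step yourself, including the loop.
squares = []
for n in range(5):
    squares.append(n * n)

# Declarative: state the result you want; the language works out the steps.
squares_declarative = [n * n for n in range(5)]

assert squares == squares_declarative  # same outcome, different styles
```

In the imperative version you own every step (and every failure mode along the way); in the declarative version you describe the end state and trust the underlying machinery to get there.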