This was a "Live Blog" from the keynote this morning. Took me a bit to get it somewhat cleaned up and get access out to post this.
Ben on stage:
Recap of Day 1:
- #dockercon - #2 worldwide trending item on Twitter yesterday
- Keynote (using the Power of AND as a theme)
- Lessons learned on the path to production: custom scripts rarely scale, developers do not adopt locked down platforms, end to end matters for both dev and ops, build management & orchestration enables portability
- Ben talking about “Containers as a Service” - Build (Docker Toolbox) -> Ship (Registry Service) -> Run (Control Plane)
- Call back to yesterday and four layers of solutions - talking about creating a solution as an end to end flow
- Interesting that Run is called out as a Control Plane (and references Tutum on the next slide)
- 20% of all content pulled from Docker Hub is “official images”, but what about all the others? You know you can trust an official image. Project Nautilus was brought out to address this other 80%.
- Showing output of a Project Nautilus scan on the screen. It breaks down line by line each library used in a container
Docker Automated Builds:
- Talking about Automated Builds - 60k automated builds per week, 300% growth since January 2015. Automated Builds 2.0 is a rearchitecture of the system to address time and quality issues.
- The new build system uses per-repo dedicated builders (you no longer share a build queue with anybody else) and starts a fresh build environment every time. This improves build times through parallelism and guarantees the quality of a clean environment.
- Dynamic matching is the other feature. Static mapping was used before (you had to manually tag your builds); dynamic matching allows for variable-based builds and more flexibility in the system over time
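To make the automated-build idea concrete: I didn't capture the exact Hub configuration shown on stage, but conceptually an automated build just runs the same build-and-push you would do locally, triggered by a change in the linked GitHub repo. A minimal sketch with placeholder repo names:

    # roughly what a Docker Hub automated build does on every push to the linked repo
    git clone https://github.com/example/voting-app.git && cd voting-app
    docker build -t example/voting-app:latest .   # build from the repo's Dockerfile
    docker push example/voting-app:latest         # publish the result to Docker Hub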
Docker Tutum:
- Now talking about Run phase (using Tutum) - Tutum guys on stage
- What is Tutum? - a cloud service that gets code from a laptop to production rapidly
Demo Time:
Talking about code from laptop into production - SaaS demo from yesterday (voting app)
- What will happen? Modify a feature, image created via Docker Hub autobuild, Image deployed in Tutum
- Showing Tutum visualizer - shows a visual representation of the app (both dev version and production version)
- The production version is deployed across regions in AWS as well in a private datacenter (balanced across both)
- Before they make a change to the app, showing the automated build in Docker Hub connected to GitHub
- Now modifying the application, commit to git repo, push to remote repo
- Showing Docker Hub changes and dynamic changes reflected from git
- Docker Hub builds the image and redeploys the image to production in automated fashion
- Take Away: Push to git and the automated workflow takes care of the rest of the build and push (a rough sketch of this flow is after the next few bullets)
- Now - to push to production from staging, Tutum shows a visual representation of the containers being upgraded. Production is upgrading in a rolling fashion automatically. “One click upgrade to production”
- What about resiliency in production? What if we take down a datacenter in production?
- Using the Tutum interface, wipe out a datacenter; Tutum redeployed the containers in a different datacenter and scaled back up to support the load (was actually a really cool demo)
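The developer-facing half of that loop is just ordinary git. A rough sketch with hypothetical file and repo names, assuming the Docker Hub automated build and the Tutum redeploy are already wired together as in the demo:

    # modify the app, then let the automated pipeline do the rest
    git add vote/templates/index.html      # hypothetical file changed for the demo
    git commit -m "Change the voting options"
    git push origin master                 # triggers the Docker Hub automated build,
                                           # which in turn redeploys via Tutum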
- 3DExperience Company customer story slide on the screen now
- Customer on stage - Talking about consistency between development and production, simplification of tools for dev and ops, ability to deploy on their cloud, and the scalability and increased high availability provided by moving to Docker containers. This is a sneak preview of the results they have achieved.
- Showing a video of their product called HomeByMe (online 3D modeling of home improvements and planning) fully running on the new system
- The system has gone from concept to production in less than a year
Docker Universal Control Plane:
Scott Johnston (SVP, Product) on stage now
- Asked for a show of hands on DockerCon attendance - the vast majority (probably 80-90%) are first-time attendees
- Asked for a show of hands of who can't put data in the cloud or can't put control planes in the cloud
- Production in the Cloud? Not for everyone due to compliance and security
- Quoted Adrian Cockcroft: “speed is the market share”
- Developers will always find a way to go fast, it’s their job
- We want Agility and Portability WITH Control
- This starts at the app level - How do we know which images to trust, who signed an image and when, how to automate, etc.
- To support this, Docker Content Trust and Docker Trusted Registry are now in sync with each other
- What about the Run aspect of all of this? What about the control plane?
- ANNOUNCEMENT: Docker Universal Control Plane
- This was Project Orca - Integrated Stack for application deployment
- Self-Service App Deploys & Updates, Provisioning & Config of Heterogeneous Clusters, LDAP/AD Integration with Docker Trusted Registry, Native Docker APIs and CLI, Monitoring, Logging.
- Completes the end to end aspect of Containers as a Service
DEMO of Docker Universal Control Plane
- login to Docker Trusted Registry
- sign the app with Docker Content Trust
- push the app to the registry - show the app has been signed
- Now how to push it out and deploy it
- Flip over to Docker Universal Control Plane and login
- The control plane sits on top of Swarm and is integrated with the native Docker API (so you can use Compose, etc.) - a rough command sketch follows at the end of this demo
- Use Docker Compose to run the app - The control plane gives access based on LDAP credentials
- Control Plane auto detects the new build and adds it into the control plane dashboard
- Shows how many resources are being consumed per account, ops dashboard basically
- Now scale up the app by adding more containers to the voting app (from command line)
- Now talking about secret management to control variables and info
- Showing that secrets are based on the access control groups in LDAP (production is locked down vs. dev which is wide open)
- Now redeploy the app using the secret instead of the environment variables
- The Control Plane allows you to rotate credentials in case they are compromised; now do a docker compose restart
- Restart, and they showed the password has been changed and rotated
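Since UCP exposes the native Docker API, my understanding is that the demo flow maps onto ordinary client commands. A rough sketch with hypothetical registry and UCP addresses (and glossing over the TLS/client-bundle setup):

    # sign and push the app image with Docker Content Trust
    export DOCKER_CONTENT_TRUST=1
    docker login dtr.example.com                        # hypothetical Docker Trusted Registry
    docker push dtr.example.com/demo/voting-app:1.0     # image is signed as it is pushed

    # point the standard client at the Universal Control Plane endpoint
    export DOCKER_HOST=tcp://ucp.example.com:443        # hypothetical UCP address

    # deploy, scale, and restart with the normal tools
    docker-compose up -d
    docker-compose scale worker=5
    docker-compose restart                              # e.g. after rotating a credential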
Docker Trusted Registry 1.4 is GA and Docker Universal Control Plane is 1.0 Beta as of today
Monday, November 16, 2015
DockerCon EU Day 1 Keynote "Live Blog"
Going to try something new here on The Cloudcast. It's been a long time since I did a blog, I'm at Dockercon EU this week and there was some interest on Twitter to get more info out about the keynote. Wireless was down during the show so this is a "semi-Live Blog". Might be some typos in here and this is a brain dump as things happened during the keynote.
- About 1500 attendees at the event
- Ben (CEO) on stage:
- Ben talking about Docker's public image and the perception that it is “just a developer tool” - they are much more than that
- Docker is about building tools of mass innovation - quote by Solomon
Stats Time:
- Docker has nearly 2000 contributors to the Docker project, over 10,000 pull requests
- Global meetup communities highlighted - 215 groups, 63 countries
- Over 60,000 projects on GitHub have Docker in the title
- State of the Project:
- 240k dockerized applications, 1.3 billion Docker Hub pulls, 5.6M Docker Hub pulls per day
- Docker has evolved from a container technology into an entire ecosystem of tools
- Open Container Initiative - 35+ members, 253 github forks, 130 contributors
- Docker used for stateful as well as stateless apps - really started as stateless and is growing into the other
- Docker in production - (see the DataDog study, a lot of stats used from that) - 8 surprising facts about Docker Adoption (google it)
- Docker in Production means making Docker much better and more robust. Must be portable and good for dev as well as ops, Secure and Extensible
Docker Stack:
- Solomon (Founder/CTO) up on stage now:
- Solomon talking about the Internet (lots of upgrades, doesn’t go down, ultimate at scale system)
- The biggest obstacle right now is software walled gardens, it stands between an eager developer and the Internet
- Docker is building an open software layer to make the Internet programmable
- Solomon talking about the Docker Stack - 4 layers in a building is the example
- Layer 1 = Standards. Let’s get everyone to agree on a way to interoperate
- Layer 2 = Infrastructure. The “plumbing” that enables everything to happen
- Layer 3 = Dev Tools. A collection of tools to make the developer experience the best it can be
- Layer 4 = Solutions. How do you solve real world problems? What is the final answer? This is solutions
Docker Quality:
Quality is what is left after you ship a feature: making a feature work every time, for every user. Quality is security, reliability, and handling failures gracefully
- What has Docker been up to? Quality tools for developers...
- First up, usability of tools; Solomon admits they have needed work. Talking about Docker Compose right now - it is the “developer entry point” into the ecosystem and the must-use tool for developers. As of the last release, Compose can do “magical” service discovery, lets you use a micro-service architecture without rewriting code, and can build persistent services with volume management (a small Compose sketch follows at the end of this section)
- Working on making the “little things” better for developers (VirtualBox integration issues, UI glitches, low-priority bugs, better error messages) - lots of unglamorous work
- Working up to a story and a demo. Story of a developer on the first day of work. How soon could they be developing an application? As simple as downloading the Docker Toolbox and running one command.
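To give a feel for the “one command” claim, here is my own minimal sketch (not the file used on stage) of a hypothetical two-service Compose setup and the single command that brings it up:

    # write a hypothetical docker-compose.yml for a tiny web + redis app
    cat > docker-compose.yml <<'EOF'
    web:
      build: .
      ports:
        - "80:5000"
      links:
        - redis
    redis:
      image: redis
    EOF

    docker-compose up -d   # the one command that builds and starts everything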
Docker Security:
- Solomon talking about “usable security” - developers care about usability first; they care about security as long as it doesn't hurt usability, otherwise they will just find a way around it
- How to give developers usable security? How do we move beyond Docker Content Trust and Notary?
- Docker Content Trust + hardware crypto = the ability to survive almost any key compromise (a double layer of protection so you can rotate and replace keys as needed, as long as the root key is kept safe)
- Announcement: Docker and Yubico - hardware crypto key for Docker Content Trust
(Demo of the product) - plug the hardware key into the laptop, enable Docker Content Trust, docker push to Docker Hub, touch the key (physically) to prove you are a human and this isn't a “bot” or something malicious, enter a password, done. (A rough command sketch is below, after the take-away.)
- LOL - made a backup copy of his keys and then published it to a public GitHub repo - not a good thing
- Security team rotated the private key to prevent a compromise, tried the demo again and of course it failed because of key rotation. Was actually a very entertaining demo
- Take Away: With the right tools, any developer can become a secure software publisher
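The user-visible part of that signing flow is tiny, which was the point. A minimal sketch with a placeholder image name (the touch-the-key and passphrase prompts come from the client when content trust is enabled and the Yubico key is plugged in):

    # enable Docker Content Trust for this shell
    export DOCKER_CONTENT_TRUST=1

    # pushing a tag now signs it; the client prompts for the key touch and passphrase
    docker push example/secure-app:1.0

    # with content trust enabled, pulls only accept signed tags
    docker pull example/secure-app:1.0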
- Isolation of a container in Linux was difficult because so many things “make” a container. Over time this has improved. The last two left are really seccomp and user namespace
- The last two have been tackled in the Swarm/Engine experimental builds
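For reference, these options landed in stable Docker Engine releases a bit later; the flag syntax below is how they eventually shipped, so treat it as illustrative rather than exactly what the experimental build used (the profile path and image name are placeholders):

    # start the daemon with user namespace remapping
    docker daemon --userns-remap=default

    # run a container under a custom seccomp profile (or disable it for debugging)
    docker run --security-opt seccomp=/path/to/profile.json example/app
    docker run --security-opt seccomp=unconfined example/app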
- Huge question with a lot of different answers - “Am I running vulnerable containers?"
- Announcement: Introducing Project Nautilus - Built-in container security analysis in Docker Hub - trigger an automated scan anytime a container is pushed to Docker Hub
- Soft launch 2 months ago, over 74 million pulls to date already scanned; self-service coming soon
- Benefits of this approach - detect vulnerabilities regardless of the Linux distribution, discover new vulnerabilities in Linux distributions and collaborate with communities to fix them, developers can use their favorite package manager (probably not the one that shipped with the distro)
- Take away: You can be secure without lock in to a specific distro
Docker at Scale:
- Next topic and Demo - Swarm at scale
- Took the Day 1 demo app and scaled it up to 1,000 nodes in Swarm - now using swarm bench to push it to 50k containers across those 1,000 nodes. Once they are up and running, the Swarm scheduler balances them across the cluster - in real time this was done in less than an hour.
Note: Swarm tested to 50k containers but that was a limitation of EC2 right now. They expect to have better numbers in the future. Docker is dedicated to making Swarm the most scalable and usable system in the industry
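I didn't capture the exact swarm bench invocation, but to give a sense of how you drive a 2015-era Swarm cluster: you point the standard Docker client at the Swarm manager and the scheduler places the containers for you. A sketch with a hypothetical manager address:

    # point the Docker client at the Swarm manager
    export DOCKER_HOST=tcp://swarm-manager.example.com:3375   # hypothetical address

    docker info          # shows the nodes in the cluster and their aggregate resources

    # launch a batch of containers; the Swarm scheduler spreads them across the nodes
    for i in $(seq 1 100); do
        docker run -d redis
    done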
Disclaimer: The Cloudcast was a media sponsor of Dockercon EU