DigitalOcean
DigitalOcean provides the easiest cloud platform to deploy, manage, and scale applications of any size, removing infrastructure friction and providing predictability so developers and their teams can spend more time building, whether that's world-changing AI apps, a side project, or a business. On this channel you can learn to build with DigitalOcean and other popular tools, and hear about the latest developments in the tech world. 🌎
Channel created: July 30, 2012
59,800
Subscribers
1,292
Total Videos
18,705,710
Total Views
Recent Videos
The Inference Economy: How Venture Is Betting on the Agentic Era
Leading investors break down the economics of scaling AI in production, from infrastructure bottlenecks to open vs. closed ecosystems, and share their predictions for what the AI industry will look...
Your Model Doesn't Matter. Your Infrastructure Does.
Everyone has access to the same models. So what actually matters? It's everything around them – routing requests to the right model, connecting to live data, scaling from prototype to production wi...
Open by Design: How NVIDIA and DigitalOcean Are Building the Stack for the Always-On Agentic Era
Kari Briski, VP Gen AI, NVIDIA, and Salman Paracha, SVP AI, DigitalOcean discuss why AI-native teams are demanding openness, model flexibility, and infrastructure built for agents that never sleep ...
Hard-Won Lessons from Teams Running High Volume Inference Workloads in Production
Scaling inference isn't a model problem. It's a decisions problem. Industry leaders from Workato Research Lab, ISMG, and Hippocratic AI share the decisions, tradeoffs, and investments that got them...
The Cost Cliff: Improve Your Tokenomics as you Grow, ft. Character.AI & Inferact.
70% of AI spend is now inference. See how Character.AI partnered with Inferact and DigitalOcean to cut inference costs by 50%, while improving throughput, on AMD GPUs. Speakers: Archana Kamath, VP...
Your Model Is Only as Good as Its Memory, with Weaviate
Your model isn't failing because it can't reason. It's failing because it doesn't have the right information at the right time. In this session, we'll dig into the data layer: the part of your AI s...
AI Disruptors: How the Next Generation of Business is Being Built
Early-stage founders building real AI companies today—what they’re solving, what’s getting in the way, and how they’re pushing through it. Moderator: Dinesh Murthy, Director of Product Management,...
DigitalOcean - Deploy Keynote - 2026
🚀 Join the Developer Cloud: https://cloud.digitalocean.com/registrations/new?utm_source=youtube&utm_medium=organic_video&utm_campaign=digitalocean&utm_content= // STAY CONNECTED 🌏 Follow our blog...
Don't use speculative decoding until you watch this
In this video, I benchmark speculative decoding with Llama-3-70B on an H100 GPU pairing it with 8B and 1B draft models, testing GPTQ and bitsandbytes quantization, and even trying ngram speculation...
What happens to AI reasoning quality when you compress a model? We tested it!
In this video, I benchmark Mistral-7B-Instruct-v0.2 on an NVIDIA H200 DigitalOcean GPU in three formats: FP16, INT8, and 4-bit AWQ — and test how precision impacts reasoning quality, speed, VRAM us...
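The precision tradeoff the video measures comes down to the lossy round trip of quantization. As a minimal sketch, the snippet below simulates symmetric per-tensor INT8 quantization of a weight vector (one scale factor, values rounded into [-127, 127]); the example values are invented, and real FP16/INT8/AWQ model formats are far more involved.

```python
# Toy sketch of symmetric INT8 quantization, illustrative only.

def quantize_int8(weights):
    """Map floats onto int8 range [-127, 127] with one per-tensor scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.02, -1.27, 0.5, 0.003]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# The round trip is lossy: values smaller than half a quantization step
# collapse toward zero, which is the accuracy cost the benchmark measures
# against the VRAM and throughput savings.
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(max_err <= scale / 2 + 1e-9)  # → True (error bounded by half a step)
```

Note how 0.003 rounds to 0 while the largest-magnitude weight survives exactly: outliers set the scale, and small weights pay for it, which is one reason outlier-aware schemes like AWQ exist.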