This course is designed for developers aiming to harness the capabilities of generative artificial intelligence in their software development processes and their application services. It provides a comprehensive overview of generative AI, covering its significance, foundational concepts, and practical applications.
Participants will explore prompting frameworks to enhance the application lifecycle, engage with essential AI tools and plugins, and learn to use code assistants within integrated development environments (IDEs) to generate code and complete websites.
It also addresses the management of API calls to large language models (LLMs), the use of context-aware frameworks, and the deployment of AI solutions using cloud provider tools such as Google Cloud's Vertex AI and Colab notebooks.
By the end of the course, participants will be equipped with the knowledge and skills necessary to effectively integrate generative AI into their development workflows, fostering innovation and efficiency.
This part equips developers with practical AI skills across the full software development lifecycle. It covers foundational LLM concepts, prompting techniques for coding tasks, an overview of LLM clients, and AI-powered coding tools in IDEs and on the command line. By the end, developers will have an immediately operational toolkit to accelerate their daily tasks.
- Overview of Generative AI and its significance
- Key concepts and terminology
- Use cases in software development
- Understanding prompting techniques
- Tips for ideation and architecture design
- Refactoring and generating tests using AI
- Overview of main plugins and their functionalities
- Mixing plugins and using presets
- Introduction to Retrieval-Augmented Generation (RAG)
- Features of code assistants
- Generating code snippets and tests
- Creating a website from prompts
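The prompting techniques above can be made concrete with a small sketch. This is an illustrative example only, not course material: the template wording, the `build_test_prompt` helper, and the sample function are all invented here; any LLM client or IDE assistant could consume the resulting prompt.

```python
# Sketch of a reusable prompt template for AI-assisted test generation.
# The template text and helper are hypothetical, for illustration only.
TEST_PROMPT_TEMPLATE = (
    "You are a senior Python developer.\n"
    "Write pytest unit tests for the function below.\n"
    "Cover the nominal case, edge cases, and invalid input.\n"
    "Return only the test code, no explanations.\n\n"
    "{source_code}\n"
)

def build_test_prompt(source_code: str) -> str:
    """Fill the template with the code to be tested."""
    return TEST_PROMPT_TEMPLATE.format(source_code=source_code)

snippet = "def slugify(title: str) -> str:\n    return title.lower().replace(' ', '-')"
print(build_test_prompt(snippet))
```

Keeping the instructions in a template like this makes the prompt reviewable and reusable across a team, which is one of the ideas behind prompt presets in assistant plugins.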
This part focuses on embedding generative and agentic AI directly into software products, services, and architectures. It covers LLM API consumption, context-aware frameworks, RAG pipelines, and the shift toward autonomous agent systems using frameworks (e.g., LangChain) and protocols (e.g., MCP). It also introduces node-based tools for visually orchestrating complex multi-agent workflows without deep low-level implementation.
- Managing LLM API calls in JSON mode and handling structured outputs
- Using context-aware frameworks with prompt templating and chaining techniques
- Using vector databases for chain construction
- Retrieval-Augmented Generation (RAG) over structured and unstructured data
- Understanding agents with LangChain Agents
- Model Context Protocol (MCP)
- MCP clients and servers
- Agentic architectures and their applications
- Overview of node-based generative AI tools
- Introduction to n8n
- Agentic architectures with node-based tools
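The JSON-mode topic above can be sketched in a few lines. This is a hedged example: the model call is stubbed out (`fake_llm` is a stand-in, not a real client), so the focus is purely on the parse-validate-retry pattern a service would wrap around a real API call with JSON output enabled.

```python
import json

def fake_llm(prompt: str) -> str:
    # Stand-in for a real completion call; returns a JSON string,
    # as a model in JSON mode would.
    return '{"sentiment": "positive", "confidence": 0.92}'

def structured_call(prompt: str, required_keys: set, retries: int = 2) -> dict:
    """Call the model, parse its JSON output, and retry on malformed replies."""
    for _ in range(retries + 1):
        raw = fake_llm(prompt)
        try:
            data = json.loads(raw)
        except json.JSONDecodeError:
            continue  # malformed JSON: ask again
        if required_keys <= data.keys():
            return data
    raise ValueError("model never returned the expected structure")

result = structured_call("Classify: 'Great release!'", {"sentiment", "confidence"})
print(result["sentiment"])  # positive
```

Validating the keys before use is what makes structured outputs safe to feed into downstream code rather than treating the model reply as free text.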
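The retrieval step of a RAG pipeline can likewise be sketched minimally. This example swaps a real embedding model and vector database for toy bag-of-words vectors and an in-memory list, so it runs anywhere; the three sample documents are invented for illustration.

```python
import math

def embed(text: str) -> dict:
    # Hypothetical stand-in for an embedding model: word counts.
    vec = {}
    for raw in text.lower().split():
        word = raw.strip("?!.,")
        vec[word] = vec.get(word, 0) + 1
    return vec

def cosine(a: dict, b: dict) -> float:
    dot = sum(a[w] * b.get(w, 0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

documents = [
    "Vertex AI hosts and serves large language models",
    "n8n orchestrates workflows with visual nodes",
    "Vector databases index embeddings for similarity search",
]

def retrieve(query: str, top_k: int = 1) -> list:
    """Rank documents by similarity to the query and return the best matches."""
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:top_k]

context = retrieve("index embeddings for similarity search")
print(context[0])
```

In a real pipeline the retrieved passages would then be injected into the prompt as grounding context before the generation call; a vector database replaces the linear scan once the corpus grows.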
- Overview of cloud tools (Vertex AI, Google Colab notebooks, etc.)
- Utilizing AI Cloud APIs (Text-to-Speech, Translation, etc.)
- Best practices for deploying AI solutions in the cloud
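As a hedged illustration of consuming a cloud AI API, the sketch below builds the request body for Google Cloud's Text-to-Speech REST endpoint (`POST .../v1/text:synthesize`). Only the payload is constructed; actually sending it requires a project and an access token, so no network call is made, and the exact field set should be checked against the current API reference.

```python
import json

def tts_request_body(text: str, language_code: str = "en-US") -> str:
    """Build the JSON body for a Text-to-Speech synthesize request."""
    payload = {
        "input": {"text": text},
        "voice": {"languageCode": language_code},
        "audioConfig": {"audioEncoding": "MP3"},
    }
    return json.dumps(payload)

body = tts_request_body("Welcome to the course")
print(body)
```

Separating payload construction from transport like this also makes the integration easy to unit-test without credentials, one of the deployment practices the module covers.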
We design payments technology that powers the growth of millions of businesses around the world, engineering the next frontiers in payments technology.
- European leader in payments and secured transactions
- Over 50bn transactions/year
- A huge & diverse tech-stack
Contributors
- Ibrahim Gharbi
- Sylvain Pollet-Villard
- Yassine Benabbas
- Raphaël Semeteys
Sponsors
- Yacine Kessaci
- Liyun He Guelton
- Fanilo Andrianasolo
- Vijayanand Premnath
- Vincent Caquelard
- Mat Goodger
- Effan Mutembo
- Cyril Cauchois
- Martin Boulanger
- Julien Carme
🔗 blog.worldline.tech 🔗 @WorldlineTech
Worldline © 2026 | Tech at Worldline
