Federated learning operates through a central server that coordinates model training across multiple devices. Each device trains a model locally on its own data and shares only the resulting model updates, never the raw data, with the central server. The server aggregates these updates into a global model, which is then redistributed to the devices for the next round of local training. This iterative process continues until the global model converges.
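The sketch below illustrates this round-trip with a minimal federated averaging loop, assuming a simple linear model trained by gradient descent on synthetic data. The helper names (`local_update`, `aggregate`), the learning rate, and the data split are illustrative assumptions, not part of any particular federated learning framework.

```python
# Minimal sketch of federated averaging: devices train locally, the server
# averages their updates (weighted by data size) into a global model, and the
# global model is redistributed for the next round. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Device-side step: train locally and return updated weights (not data)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # gradient of mean squared error
        w -= lr * grad
    return w

def aggregate(updates, sizes):
    """Server-side step: weighted average of device updates."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(updates, sizes))

# Synthetic data split across three simulated devices (hypothetical sizes).
true_w = np.array([2.0, -1.0])
devices = []
for n in (50, 80, 30):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + 0.1 * rng.normal(size=n)
    devices.append((X, y))

global_w = np.zeros(2)
for _ in range(20):                              # repeat rounds until convergence
    updates = [local_update(global_w, X, y) for X, y in devices]
    sizes = [len(y) for _, y in devices]
    global_w = aggregate(updates, sizes)         # redistribute for the next round

print("learned weights:", global_w)              # should approach [2.0, -1.0]
```

Weighting each device's update by its local sample count is one common aggregation choice; in practice the server may also subsample devices per round or apply secure aggregation before averaging.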