How fast you can train your model depends on both your hardware and your parallelism strategy, and knowing what hardware you have available will guide which strategy to use.
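For intuition, here is a minimal, hypothetical sketch (not part of LLM-ATC; the function name, thresholds, and memory figures are illustrative assumptions) of how hardware characteristics can guide the choice of parallelism strategy:

```python
# Hypothetical heuristic, for illustration only: map available hardware to a
# rough parallelism strategy. Not an LLM-ATC API.

def pick_parallelism_strategy(num_gpus: int, gpu_memory_gb: float,
                              model_memory_gb: float) -> str:
    """Return a rough parallelism strategy for the given hardware.

    model_memory_gb should include parameters, gradients, and optimizer state.
    """
    if num_gpus <= 1:
        # Single device: no parallelism to choose; rely on gradient
        # accumulation / activation checkpointing instead.
        return "single GPU (gradient accumulation, activation checkpointing)"
    if model_memory_gb <= gpu_memory_gb:
        # Full training state fits on each GPU: replicate the model and
        # split batches across devices.
        return "data parallel (e.g. DDP)"
    if model_memory_gb <= num_gpus * gpu_memory_gb:
        # Training state only fits when sharded across all GPUs.
        return "sharded data parallel (e.g. FSDP / ZeRO)"
    # The model does not fit even when sharded: split the model itself
    # across devices and nodes.
    return "tensor and/or pipeline parallelism across nodes"


if __name__ == "__main__":
    # Example (rough numbers): a 7B-parameter model trained in mixed precision
    # with Adam needs on the order of 100 GB of training state, so
    # 8 x A100-40GB would call for sharded data parallelism.
    print(pick_parallelism_strategy(num_gpus=8, gpu_memory_gb=40,
                                    model_memory_gb=100))
```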
LLM-ATC handles the orchestration of finetuning and serving LLMs. Just bring your data and let LLM-ATC do the work under the hood on the cloud provider of your choice.