Outshift | Understanding LLMs: Attention mechanisms, context windows, and fine-tuning