Hi there! I need some advice. I'm starting a project at my school and I'd like to purchase a graphics card to use GPUs in my research. Which one would you recommend for learning Transformers and small models at a lower cost?

In the future, I would also like to port or test this project on a 100% ARM architecture using an Ampere CPU-based environment, so that I can compare solutions with and without a GPU.

Thank you in advance for your answers!

Regards,
Sounds like a fun project! Just curious: are you considering something lightweight like llama.cpp for local inference? I've been testing it for smaller setups and am wondering how it performs in real AI applications. Would love to know what direction you're taking!
The only direction I mean is a fully native approach to AI: using the full potential of the CPU, not going through a GPU, and running a real compiled program rather than a general-purpose script interpreter like Python. I think a separate inference framework may not even be needed if the program is compiled natively for the actual processor. :)
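To make the idea concrete: the hot loop of transformer inference is mostly matrix-vector multiplication, and a natively compiled C loop can be auto-vectorized by the compiler (SSE/AVX on x86, NEON/SVE on an Ampere/AArch64 box with something like `gcc -O3 -mcpu=native`), which an interpreted Python loop cannot. Here is a minimal sketch of that core operation; the sizes and names are illustrative, not taken from any particular model or library:

```c
#include <stdio.h>
#include <stdlib.h>

/* y = W * x, with W stored row-major (rows x cols).
 * This inner loop is a prime candidate for SIMD auto-vectorization
 * when compiled natively for the target CPU. */
static void matvec(const float *W, const float *x, float *y,
                   int rows, int cols) {
    for (int r = 0; r < rows; r++) {
        float acc = 0.0f;
        const float *row = W + (size_t)r * cols;
        for (int c = 0; c < cols; c++) {
            acc += row[c] * x[c];
        }
        y[r] = acc;
    }
}

int main(void) {
    /* Hypothetical dimensions, roughly the hidden size of a small model. */
    const int rows = 4096, cols = 4096;
    float *W = malloc((size_t)rows * cols * sizeof(float));
    float *x = malloc((size_t)cols * sizeof(float));
    float *y = malloc((size_t)rows * sizeof(float));
    if (!W || !x || !y) return 1;

    /* Fill with dummy values so the result is easy to check. */
    for (size_t i = 0; i < (size_t)rows * cols; i++) W[i] = 0.001f;
    for (int i = 0; i < cols; i++) x[i] = 1.0f;

    matvec(W, x, y, rows, cols);
    printf("y[0] = %f\n", y[0]);  /* expect about 4096 * 0.001 = 4.096 */

    free(W); free(x); free(y);
    return 0;
}
```

Compiling this once for x86 with `-march=native` and once on the Ampere machine with `-mcpu=native` would give you a simple, like-for-like baseline for your CPU-vs-GPU comparison.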