Hi there! I need some advice. I'm starting a project at my school and I'd like to purchase a graphics card to use GPUs in my research. Which one would you recommend for learning Transformers and small models at a lower cost?

In the future, I would also like to port or test this project on a 100% ARM architecture using an Ampere CPU-based environment, so that I can compare solutions with and without a GPU.

Thank you in advance for your answers!

Regards,
The one direction I have in mind is a fully native approach to AI: using the full potential of the CPU rather than going through a GPU, and using a real compiled program instead of a general-purpose script interpreter like Python. I think maybe no separate inference framework is even needed if the program is compiled directly for the processor actually being used. :)
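To give a rough idea of what I mean by "compiled and CPU-native", here is a minimal sketch: a tiny dense-layer forward pass in plain C, with no Python interpreter and no GPU involved. The layer sizes, weights, and file name are made up for illustration only; a real Transformer would of course need far more than this, but the same principle (compile for the target CPU) should apply on an Ampere/AArch64 machine just as on x86.

```c
#include <stdio.h>

/* Illustrative sizes only -- a real model would be much larger. */
#define IN_DIM  4
#define OUT_DIM 3

/* One dense layer: y = relu(W * x + b).
 * Weights and bias are hard-coded placeholders for this sketch. */
static const float W[OUT_DIM][IN_DIM] = {
    { 0.10f, -0.20f,  0.30f, 0.05f},
    { 0.40f,  0.10f, -0.10f, 0.20f},
    {-0.30f,  0.25f,  0.15f, 0.10f},
};
static const float b[OUT_DIM] = {0.01f, -0.02f, 0.03f};

static void dense_relu(const float *x, float *y) {
    for (int o = 0; o < OUT_DIM; ++o) {
        float acc = b[o];
        for (int i = 0; i < IN_DIM; ++i)
            acc += W[o][i] * x[i];
        y[o] = acc > 0.0f ? acc : 0.0f;   /* ReLU activation */
    }
}

int main(void) {
    const float x[IN_DIM] = {1.0f, 2.0f, 3.0f, 4.0f};
    float y[OUT_DIM];
    dense_relu(x, y);
    for (int o = 0; o < OUT_DIM; ++o)
        printf("y[%d] = %f\n", o, y[o]);
    return 0;
}
```

Built with something like `gcc -O3 -march=native dense.c -o dense` (file name just an example), the same source compiles natively on an ARM or x86 box, which is the kind of GPU-free, interpreter-free baseline I'd like to compare against.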