

This is where AI could be useful. Transcribing and summarising the voice recordings.


Fiat Punto.
Exam cars in my area were Puntos, so driving schools tended to teach in the same model.


I think we can assume it’s an Nvidia H200, which peaks at 700 W from what I saw on Google. Multiply that by the turnaround time from your prompt to the full response and you have a ceiling value. There’s probably some queueing and other delays, so in reality the time the GPU spends on your query will be much less. If you use the API, it may include timing information.
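The back-of-envelope ceiling above is just power times time. A sketch, where the 700 W figure and the 10-second turnaround are both assumptions, not measurements:

```python
# Upper bound on per-prompt GPU energy, assuming a single H200 at its
# ~700 W peak is busy for the entire wall-clock turnaround. Real
# GPU-seconds per query are lower due to queueing and batching.

def ceiling_energy_wh(gpu_power_w: float, turnaround_s: float) -> float:
    """Ceiling energy in watt-hours: power (W) * time (s) / 3600."""
    return gpu_power_w * turnaround_s / 3600.0

# e.g. a 10-second turnaround on one 700 W GPU:
print(ceiling_energy_wh(700, 10))  # 700 * 10 / 3600 ≈ 1.94 Wh
```

If the API reports actual compute time instead of wall-clock time, plugging that in gives a much tighter bound.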


There’s a huge difference between a model you can run locally and a ChatGPT model.


The question is about per-prompt energy, so the number of users is not relevant. What may be more relevant is the number of tokens in and out.
If anything, more users will decrease power use per prompt due to economies of scale.
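The point about users and tokens can be sketched as a toy model (all numbers here are invented for illustration): per-prompt cost is roughly token-proportional, while fixed serving overhead gets amortized across a batch of users.

```python
# Toy per-prompt energy model: a token-proportional term plus a fixed
# overhead shared across the batch. Not a real measurement -- the
# wh_per_token and overhead values are placeholders.

def energy_per_prompt_wh(tokens: int, wh_per_token: float,
                         fixed_overhead_wh: float, batch_size: int) -> float:
    """Token-proportional cost plus fixed overhead amortized over the batch."""
    return tokens * wh_per_token + fixed_overhead_wh / batch_size

# Same 1000-token prompt, larger batch -> lower per-prompt energy:
solo = energy_per_prompt_wh(1000, 0.001, 2.0, batch_size=1)     # 1.0 + 2.00 = 3.00 Wh
batched = energy_per_prompt_wh(1000, 0.001, 2.0, batch_size=8)  # 1.0 + 0.25 = 1.25 Wh
```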
Everyone. One person is too lazy to write a message; the others can’t be bothered to listen to the whole thing 🤷‍♂️
The transcription should be attached to the audio recording, so if the sender cares about it being correct they can comment or add a correction.