Researchers found that some LLMs create four times the CO2 emissions of other models with comparable accuracy. Their findings allow users to make informed choices.
Obviously. But I have no context for how much CO2 my actions create in the first place. I assume driving a car generates the majority of it, or maybe heating the house, but I still don't have any clue how many kilograms that might be. What I do know is how many kilowatt-hours of electricity my house consumes, and at least roughly how much our appliances use, so if you want to try and blame me for consuming precious resources by generating text or watching a video, at least give me a measurement I can easily comprehend.
Generally, heating and cooling are the biggest domestic energy consumers; next up is the car, and then electrical appliances (from what I remember).
As long as you don't take a transatlantic trip, you're fine:
“For example, having DeepSeek R1 (70 billion parameters) answer 600,000 questions would create CO2 emissions equal to a round-trip flight from London to New York. Meanwhile, Qwen 2.5 (72 billion parameters) can answer more than three times as many questions (about 1.9 million) with similar accuracy rates while generating the same emissions.”
I don't know, y'all, but I can say it takes me a long-ass time to ask that many questions.
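The quoted numbers are enough for a rough back-of-envelope per-question figure. This sketch assumes a round-trip London–New York flight emits on the order of 1,000 kg of CO2 per passenger (an assumption; published estimates vary), and just divides by the question counts from the quote:

```python
# Rough per-question CO2, using the question counts from the quoted study.
# ASSUMPTION: a round-trip London-New York flight is ~1,000 kg CO2 per
# passenger; real estimates vary, so treat the absolute numbers loosely.
FLIGHT_KG_CO2 = 1000.0

deepseek_questions = 600_000    # DeepSeek R1 (70B), from the quote
qwen_questions = 1_900_000      # Qwen 2.5 (72B), from the quote

# Convert kg to grams, then divide by question count.
deepseek_g_per_q = FLIGHT_KG_CO2 * 1000 / deepseek_questions
qwen_g_per_q = FLIGHT_KG_CO2 * 1000 / qwen_questions

print(f"DeepSeek R1: ~{deepseek_g_per_q:.2f} g CO2 per question")
print(f"Qwen 2.5:    ~{qwen_g_per_q:.2f} g CO2 per question")
print(f"Ratio:       ~{deepseek_g_per_q / qwen_g_per_q:.1f}x")
```

Whatever flight figure you plug in, the ratio comes out around 3.2x, which matches the quote's "more than three times as many questions" for the same emissions; only the absolute grams-per-question depend on the assumed flight number.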