Brian & Alec,
Thank you for your thoughtful analysis. Please let me share a bit of background, and ask for your assistance if I may.
We updated the pricing because we needed to match our actual COGS for low-volume projects. (Because of some aspects of our back-end implementation, it is much, much less expensive for us to operate high-volume projects than low-volume projects.)
We are committed to delivering highly favorable, consumption-centric, IaaS-style pricing for event data flow, and we also need to run a viable and healthy business. This was at the core of the changes. As we continue to refine our implementation over time, I expect we will pass those savings on to you.
As you have pointed out, the simplicity of the currently published event pricing model does not work for high-volume usage. You laid this out quite well, although I have one small ask below.
Since those prices were published, we have been working closely with several very high-volume enterprise customers to refine a simple discount schedule based on event volume that is both sensitive to our COGS and consistent with the volume deals we have actually been signing.
We will be revising the pricing pages to show these simple discount schedules soon, and we would be very happy to share our work in progress with you privately to get your feedback before things are locked in. Please do drop a note here if you're interested in doing so; I suspect you'll find them quite good.
I do have one ask of you, if I may. Your methodology of starting with 500MB and dividing it down into an "event count" by assuming 100 bytes per message is certainly one way to do simple math, but respectfully it doesn't match the reality of how we've seen customers deploy the product. To take your method to the extreme, if a customer has an existing LoRa application that sends 10-byte payloads a few times every day (as LoRa applications tend to do), they aren't going to consume 500MB in 10 years. Should they redesign their app to send 10 messages per minute, rather than a few times a day, just to consume the 500MB?
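To put rough numbers on that LoRa example (a back-of-the-envelope sketch only, assuming three 10-byte payloads per day, since the exact cadence will vary):

    # Illustrative arithmetic only; assumes three 10-byte payloads per day.
    bytes_per_message = 10
    messages_per_day = 3                  # "a few times every day"
    days = 365 * 10                       # ten years
    total_bytes = bytes_per_message * messages_per_day * days
    print(total_bytes)                    # 109500 bytes, roughly 0.1 MB
    print(total_bytes / (500 * 10**6))    # about 0.02% of a 500MB allotment

In other words, a typical narrowband device of that kind barely touches the allotment over its entire lifetime.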
We chose to include 500MB and the event-based pricing model because both work very well for the vast majority of narrowband use cases, which tend to send relatively small amounts of data occasionally and conserve power between those transmissions. There are certainly many high-bandwidth, high-rate, or streaming applications where different pricing models (and different infrastructure architectures) may be appropriate.
The example you laid out would send a message about every 5 minutes, continuously for 10 years. That certainly isn't unreasonable in any way, but such a device would be connected to the cell network constantly and clearly isn't going to be battery powered.
What would be incredibly helpful for us is to understand more about the use case: the nature of the device, how it is powered, why it needs to report every 5 minutes, the latency you need from an event occurring until it appears in your cloud, and so on. The better we understand real use cases, the better we can tune our architecture and our pricing to reality.
Can you please share this info privately with our team?
Thank you for your patience with us; we're listening and learning as we grow.
Ray