Replies: 2 comments
-
@lundejd This is awesome! I love this knowledge sharing! 😍

1. Would you be open to writing up a blog post on the FinOps blog on Tech Community? Your own blog would also be good; we can write one that points to yours to drive more awareness and traffic your way 🙂
2. Is there anything we could do to improve the documentation to make the upgrade process easier?
3. Beyond documentation, do you see any opportunities to optimize the merge process to make it less manual? Power BI doesn't offer many APIs, which makes this challenging, but we could ship a ZIP file of the full Power BI project files, which might make this easier. Or maybe there's a packaging approach that would help 🤔 It depends on how you're making your changes.
4. I would love to find a way to make some of what you're doing more configurable so you don't have to merge code, and potentially allow you to do less so you can focus on other areas. I've probably heard every single one of those things from others, so generalizing this would be amazing. Are there any pieces of your solution that you feel might be easier to move into the toolkit to simplify your life and possibly make it easier for others to do the same?
5. When you mention Fabric, are you pointing Fabric to the hub storage account? I'd love to learn more about what you're doing here. We are also working on deeper Fabric integration, which might help. And as an open-source project, we'd love to collaborate with you on that, if it makes sense.

Hubs will eventually include an allocation engine that will facilitate what you're doing. We haven't dug into the details, but we've definitely talked about doing allocation as part of data ingestion vs. joins later in the process. There are pros and cons to both that I won't get into here, but FYI: we are thinking about this problem and do want to get there. Possibly more important than that is an extensibility platform we've also talked about, which would allow you to inject custom logic into the data ingestion pipeline. You're not doing that here, but I could see it being of value as your costs grow, and it would better facilitate incorporating out-of-band changes that wouldn't be impacted by upgrades. I see this as a blocker for us hitting 1.0.

Azure Gov support should be coming in Cost Management soon, and you should be able to use remote hubs to bring that in. If needed, we could build a FOCUS converter, but I'm hesitant for us to invest in that given Cost Management is so close to shipping FOCUS support in Azure Gov. We're also talking about adding AWS, GCP, and OCI. A few orgs have implemented these, and we're in talks about getting those submitted as new PRs.

In general, this is great to see. Thank you for sharing! Let us know what else we can do to improve the experience for you and your stakeholders. We're here to help make your job easier 🙂
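To make the ingestion-vs.-join trade-off concrete, here is a minimal pandas sketch of the two allocation placements. Every table and column name (SplitPercent, rg-shared, and so on) is a made-up illustration, not anything hubs actually ship today.

```python
# Illustrative only: allocation applied at ingestion time vs. kept as a
# join for query time. All names are assumptions, not the hub schema.
import pandas as pd

costs = pd.DataFrame({
    "ResourceGroup": ["rg-shared", "rg-shared", "rg-app1"],
    "BilledCost": [100.0, 50.0, 25.0],
})
splits = pd.DataFrame({
    "ResourceGroup": ["rg-shared", "rg-shared"],
    "BillGroup": ["TeamA", "TeamB"],
    "SplitPercent": [0.6, 0.4],
})

# Option 1: allocate during ingestion. Each row is split once, up front,
# so downstream reports never need to know the split rules.
allocated = costs.merge(splits, on="ResourceGroup", how="left")
allocated["SplitPercent"] = allocated["SplitPercent"].fillna(1.0)
allocated["BillGroup"] = allocated["BillGroup"].fillna("Unallocated")
allocated["AllocatedCost"] = allocated["BilledCost"] * allocated["SplitPercent"]
print(allocated[["ResourceGroup", "BillGroup", "AllocatedCost"]])

# Option 2: keep raw costs and perform the same merge at report time.
# Cheaper to re-run when split rules change, but every consumer repeats
# the join, which is the trade-off mentioned above.
```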
-
I'm definitely open to blogging about this (where would I start?). Things are evolving quickly, and our model is improving as we push DAX calculated columns and measures back into Power Query calculated columns, since Fabric exposed our inefficient CPU usage.

As for the documentation, I think it mentions that any customizations require more work, but it doesn't dive into details on how. We settled on copying from your latest .pbix files into our customized version. Right now we can't tell month to month what changed, so we just copy everything and then clean up the parts that no longer seem to be needed. Maybe a more detailed changelog for the Power BI files would help, though it's possible I've just missed it. I used this same process to re-create your CostDetails file as a base Dataflow in Fabric as well.

There is a lot of opportunity to automate the merge process. I've made it as easy as possible using a Power BI chart that has the columns set up to match the database we use in our merge, but this won't be sustainable as the requests to split things like resource groups grow. Once we turn our focus away from getting Fabric set up to replace our large .pbix files, we'll have to come back to this. I also had the thought that we can't be the only ones wanting these types of cost splits, and I wonder if there is a much better way that wouldn't lead to some of the issues we have when merging into the cost data. I have always thought (going back to blogging) that if I could get the time to publish what we're doing, the smart minds of the Internet would quickly come back with improvements and suggestions.

Fabric was easy to set up but has been harder to optimize. With 12M+ rows of data each month, plus our merges, we found ourselves running out of resources quickly, and we can only scale up so much and still justify the use for internal bill-back. Right now it's basically:
(We also have FOCUS Dataflows for AWS, GCP, and the shared cost databases mentioned above that feed into the same semantic model. It would be fun to see you incorporate other clouds as well.)

Because we have so much data, we have had to run steps #3 and #5 month by month in "append" mode, otherwise the job times out. A bit of a pain, and we'll have to redo this with any upgrade to FOCUS or the toolkit. Once all this was done, we built a pipeline that first deletes the current month's data, runs #3 to refresh and append the current month, then does the same for #5. Next month when bill-back day comes, I'll have the pipeline run at night, and the data should be ready in the morning for processing.

This sounds like a lot of work, but the benefits are huge: sharing a semantic model with measures among the FinOps team, reducing the need for each person to do time-consuming refreshes of their local .pbix file while hoping we all end up with the same data, running in live mode with .pbix files that are 50 KB in size vs. 5+ GB, etc. I really think Fabric is a game changer and I can't wait to see how the toolkit evolves into it.
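In case it helps anyone replicate the refresh, here is a minimal pandas sketch of the delete-then-append idea described above. The real pipeline runs in Fabric over Dataflows; the ChargePeriodStart column and the in-memory frames are illustrative assumptions, not the actual setup.

```python
# Sketch of an idempotent monthly refresh: drop the current month's rows,
# then append the fresh extract, so re-runs never double-count.
import pandas as pd

def refresh_current_month(existing: pd.DataFrame, fresh: pd.DataFrame,
                          month: str) -> pd.DataFrame:
    """Replace all rows for `month` (formatted YYYY-MM) with `fresh`."""
    period = existing["ChargePeriodStart"].dt.strftime("%Y-%m")
    kept = existing[period != month]                      # delete step
    return pd.concat([kept, fresh], ignore_index=True)    # append step

existing = pd.DataFrame({
    "ChargePeriodStart": pd.to_datetime(["2024-04-01", "2024-05-01"]),
    "EffectiveCost": [10.0, 12.0],
})
fresh = pd.DataFrame({
    "ChargePeriodStart": pd.to_datetime(["2024-05-01"]),
    "EffectiveCost": [13.5],
})
print(refresh_current_month(existing, fresh, "2024-05"))
```

Running this once per source, month by month, keeps each job small, which is the same reason the append-mode runs above are split up.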
-
The docs ask readers to "start a discussion," so I thought I'd finally answer. Keep in mind I don't expect any of what we are doing to make it into the toolkit, as it is highly customized and not all that tough to upgrade.
To start, we use the CostSummary.pbix file in the toolkit and then build off that. We don't make changes to the code, only additions after it. (That being said, every time there is an upgrade we're copy/pasting everything from the new CostSummary file into ours, which does take time, especially to re-validate the numbers.)
That was the easy part. This year we have developed methods to:
2. Share Resource Group costs among our BillGroups by percentage.
3. Bill resources under an RG to a separate BillGroup.
4. Bill budget overages to a secondary BillGroup.
5. Split BillGroups into more granular cost centers.
All of these require their own database. Items 2 and 3 are joined to CostDetails, each requiring a new set of columns for things like BilledCost and EffectiveCost (each building on the previous). The others are done via relationships or TREATAS connections.
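For readers trying to picture items 2 and 3, here is a rough pandas sketch of the chained joins. Every table and column name (rg_splits, OverrideBillGroup, the BilledCost2/EffectiveCost2 pair, and so on) is a made-up stand-in; our real implementation lives in Power BI, not Python.

```python
# Illustrative chained splits: item 2 shares an RG's cost by percentage,
# item 3 then re-bills specific resources to a different BillGroup.
import pandas as pd

cost_details = pd.DataFrame({
    "ResourceId": ["vm1", "vm2"],
    "ResourceGroup": ["rg-shared", "rg-app1"],
    "BillGroup": ["TeamA", "TeamB"],
    "BilledCost": [100.0, 40.0],
    "EffectiveCost": [90.0, 40.0],
})

# Item 2: share a resource group's cost among BillGroups by percentage.
rg_splits = pd.DataFrame({
    "ResourceGroup": ["rg-shared", "rg-shared"],
    "SplitBillGroup": ["TeamA", "TeamC"],
    "SplitPercent": [0.7, 0.3],
})
step2 = cost_details.merge(rg_splits, on="ResourceGroup", how="left")
step2["SplitPercent"] = step2["SplitPercent"].fillna(1.0)
step2["SplitBillGroup"] = step2["SplitBillGroup"].fillna(step2["BillGroup"])
# Each step adds its own cost columns, building on the previous step's.
step2["BilledCost2"] = step2["BilledCost"] * step2["SplitPercent"]
step2["EffectiveCost2"] = step2["EffectiveCost"] * step2["SplitPercent"]

# Item 3: bill a resource under an RG to a separate BillGroup.
overrides = pd.DataFrame({"ResourceId": ["vm2"],
                          "OverrideBillGroup": ["TeamD"]})
step3 = step2.merge(overrides, on="ResourceId", how="left")
step3["FinalBillGroup"] = step3["OverrideBillGroup"].fillna(step3["SplitBillGroup"])
print(step3[["ResourceId", "FinalBillGroup", "BilledCost2", "EffectiveCost2"]])
```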
We don't use Microsoft's built-in functionality because it only works at the cost level, and we bill back actual RI usage with the amortization table. We show each BillGroup their regular cost, RI/SP consumption, anything impacted by items 2-5, any markups, and then their final billing-period total. We also fold in Azure Gov costs (and pull AWS/GCP FOCUS data, though it is currently separate).
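As a rough illustration of that per-BillGroup statement, here is a small pandas sketch. The column names and the flat 5% markup are assumptions for the example only; our actual markups and amortization come from the Cost Management data.

```python
# Illustrative per-BillGroup rollup: regular (billed) cost, amortized
# RI/SP consumption, an assumed markup, and the final period total.
import pandas as pd

rows = pd.DataFrame({
    "BillGroup": ["TeamA", "TeamA", "TeamB"],
    "BilledCost": [100.0, 0.0, 40.0],     # what was invoiced
    "EffectiveCost": [90.0, 15.0, 40.0],  # amortized RI/SP view
})

summary = rows.groupby("BillGroup", as_index=False).agg(
    RegularCost=("BilledCost", "sum"),
    AmortizedCost=("EffectiveCost", "sum"),
)
summary["Markup"] = summary["AmortizedCost"] * 0.05  # assumed 5% markup
summary["FinalTotal"] = summary["AmortizedCost"] + summary["Markup"]
print(summary)
```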
As complicated as it sounds, it works to the penny. One person can run our monthly billing in under an hour and have it off to finance. We are now building all of this in Fabric. (We have tried three other third-party tools, but none can match our needs; not to mention our billing isn't on the calendar month, which means we're maintaining a rather complex dates table as well.)
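For anyone facing the same non-calendar-month problem, here is a minimal sketch of a custom billing-period dates table in pandas. The 26th-to-25th cutoff is purely an assumption for illustration; all I said above is that our billing doesn't follow the calendar month.

```python
# Illustrative dates table where billing periods run from the 26th of
# one month through the 25th of the next (the cutoff day is an assumption).
import numpy as np
import pandas as pd

days = pd.date_range("2024-01-01", "2024-12-31", freq="D")
dates = pd.DataFrame({"Date": days})

# Days 1-25 belong to their own month's period; days 26+ roll forward
# into the next month's period.
same_month = dates["Date"].dt.to_period("M").astype(str)
next_month = (dates["Date"] + pd.offsets.MonthBegin(1)).dt.to_period("M").astype(str)
dates["BillingPeriod"] = np.where(dates["Date"].dt.day <= 25,
                                  same_month, next_month)

# Spot-check the boundary: the 25th stays put, the 26th rolls forward.
print(dates[dates["Date"].dt.day.isin([25, 26])].head(4))
```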
I thought I'd share in case anyone is doing anything similar. It's fun to hear what others are doing.