Projects

Some of the key projects where I have been involved in envisioning, building, and improving products and services.

Throughout my career, I've been involved in multiple projects. Some of them were products that served general consumers, while others had users with specialised skill-sets. Many were internal capabilities that enabled the organisation's developers to leverage standard, compliant pipelines, and some were products used by external developers. Apart from requiring different technical solutions, they also demanded different mindsets for engaging with customers and employees, and for measuring success (retention, CSAT, monthly active users, error rates, etc.).

3D Consumer Products

image reference: https://www.autodesk.com/in/solutions/123d-apps

This groups together my involvement in TinkerCAD, the 123D apps (Design, Make, Catch), and platforms: WebCAD (spun out from TinkerCAD) and Creative Platform (3D shape customizers).

123D was a set of browser-based applications that extended 3D modeling, editing and creation capabilities to end-users. This was a bold new initiative aimed at empowering hobbyists, students and everyday users to create digital 3D designs without any prior knowledge of 3D modeling software. I was a software developer on this team, responsible for building the core framework used by the whole 123D family of products, since I was the only person with advanced JavaScript experience at the time.

image reference: https://commons.wikimedia.org/wiki/File:Logo-tinkercad-wordmark.svg

TinkerCAD was a similar browser-based product that was built by a different organisation and acquired by Autodesk. TinkerCAD differed from some of the 123D applications in that it mainly used boolean-based mesh computation instead of the full-fledged solid modeling kernel that the 123D applications preferred. I was part of the team that took over TinkerCAD after its acquisition and worked on merging the common 123D offerings into TinkerCAD.

TinkerCAD and 123D combined had more than 1M users (and growing). The biggest challenge was scaling the servers to handle the growing load while also keeping up with a vibrant community that kept expecting more features 😄. TinkerCAD had a set of computational servers written in Go, running on a Linode stack. The Go services were excellent, but the cloud infrastructure was not, so we migrated to AWS. Docker was in beta around that time, but we started experimenting with running the computational operations within containers on our self-hosted EC2 instances. Adding a cache for common computational operations helped reduce costs further. Eventually we moved everything to a self-hosted Kubernetes stack, and I know this was moved to EKS after I left the engagement. We had hit 10M users around the time I started on the next project; today TinkerCAD has more than 70M users.
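
The caching worked because the mesh operations are deterministic: hash the operation and its inputs, and reuse prior results. Here is a minimal sketch in Go; the cache shape and names like opCache are illustrative, not the actual TinkerCAD code:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"sync"
)

// opCache memoises results of deterministic, pure geometry operations.
// Identical requests hit the cache instead of re-running an expensive
// boolean computation on the compute servers.
type opCache struct {
	mu      sync.RWMutex
	results map[string][]byte
}

func newOpCache() *opCache {
	return &opCache{results: make(map[string][]byte)}
}

// key derives a stable cache key from the operation name and the
// content hashes of its input meshes.
func key(op string, inputHashes ...string) string {
	h := sha256.New()
	h.Write([]byte(op))
	for _, ih := range inputHashes {
		h.Write([]byte(ih))
	}
	return hex.EncodeToString(h.Sum(nil))
}

// getOrCompute returns a cached result when one exists, otherwise runs
// compute and stores its output for future identical requests.
func (c *opCache) getOrCompute(k string, compute func() []byte) []byte {
	c.mu.RLock()
	if r, ok := c.results[k]; ok {
		c.mu.RUnlock()
		return r
	}
	c.mu.RUnlock()

	r := compute()
	c.mu.Lock()
	c.results[k] = r
	c.mu.Unlock()
	return r
}

func main() {
	cache := newOpCache()
	k := key("union", "meshA-hash", "meshB-hash")

	// First call computes; the second identical call is served from cache.
	out := cache.getOrCompute(k, func() []byte {
		return []byte("serialised result of an expensive mesh boolean")
	})
	fmt.Println(len(out), "bytes (computed)")

	out = cache.getOrCompute(k, func() []byte {
		panic("not reached: result is cached")
	})
	fmt.Println(len(out), "bytes (cached)")
}
```

In production such a cache would sit behind an eviction policy (LRU/TTL) or an external store rather than an unbounded in-process map.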

This was also around the time I started taking on people-management responsibilities alongside my software engineering work.

Project Thunderstorm (Internal)

Thunderstorm was the codename for a solution that enabled server-side execution of Dynamo (see https://dynamobim.org/) nodes. Dynamo is a visual programming tool that works with CAD and 3D applications, providing a low-code/no-code environment for building automated workflows. These workflows normally execute locally; Thunderstorm moved that execution to the server side.

Each Dynamo script is a representation of the node execution graph, and Thunderstorm built an optimised execution strategy around the operations that could execute in parallel. Long-running nodes were executed on their designated cloud infrastructure (Windows/Linux containers, Lambdas, or in some cases pre-configured VMs).
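
As a rough illustration of the scheduling idea (not Thunderstorm's actual code), here is a Go sketch that derives parallel execution "waves" from a node dependency graph: each wave contains nodes whose dependencies have all completed, so everything within a wave can run concurrently:

```go
package main

import "fmt"

// graph maps each node to the nodes it depends on.
type graph map[string][]string

// waves groups nodes into batches: every node in a batch has all of its
// dependencies satisfied by earlier batches, so a batch can run in parallel.
func waves(g graph) [][]string {
	indegree := map[string]int{}
	dependents := map[string][]string{}
	for node, deps := range g {
		indegree[node] += 0 // ensure every node has an entry
		for _, d := range deps {
			indegree[d] += 0
			indegree[node]++
			dependents[d] = append(dependents[d], node)
		}
	}

	// Nodes with no unmet dependencies form the first wave.
	var ready []string
	for node, deg := range indegree {
		if deg == 0 {
			ready = append(ready, node)
		}
	}

	var out [][]string
	for len(ready) > 0 {
		out = append(out, ready)
		var next []string
		for _, n := range ready {
			for _, m := range dependents[n] {
				indegree[m]--
				if indegree[m] == 0 {
					next = append(next, m)
				}
			}
		}
		ready = next
	}
	return out
}

func main() {
	// "report" and "export" both depend only on "analysis",
	// so they land in the same wave and can run concurrently.
	g := graph{
		"geometry": nil,
		"analysis": {"geometry"},
		"report":   {"analysis"},
		"export":   {"analysis"},
	}
	for i, w := range waves(g) {
		fmt.Printf("wave %d: %v\n", i, w)
	}
}
```

Long-running nodes within a wave would then be dispatched to their configured cloud target instead of blocking the rest of the graph.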

I was part of the team that worked on the execution engine, as well as a UI for visualising and debugging the execution graph. Some of this work was integrated into Dynamo as part of its Dynamo Player.

Project Anvil & CloudOS (Internal)

Project Anvil was a solution built to support our then-upcoming generative design platform. The generative design servers were based on existing technologies that required long-lived in-memory state, which meant that common stateless container-based solutions would not work. Anvil was built using Go, Kubernetes and Docker, and it allowed participating services to discover and communicate with each other via a contract-based protocol (Protocol Buffers over gRPC). Execution was auto-retried if containers were terminated due to resource constraints, with execution context preserved where required. There were multiple technical challenges, such as the limited support around Kubernetes, which was still in beta at the time.
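
A minimal sketch of the auto-retry behaviour, assuming a gRPC client call whose serving container can disappear mid-flight; this is illustrative only, and Anvil's real implementation also restored execution context on retry:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// retryable reports whether a gRPC error suggests the serving container
// went away (evicted, rescheduled) rather than a genuine failure.
func retryable(err error) bool {
	switch status.Code(err) {
	case codes.Unavailable, codes.Aborted, codes.DeadlineExceeded:
		return true
	}
	return false
}

// callWithRetry re-issues call with exponential backoff until it succeeds,
// fails with a non-retryable error, or exhausts its attempts.
func callWithRetry(ctx context.Context, attempts int, call func(context.Context) error) error {
	backoff := 200 * time.Millisecond
	var err error
	for i := 0; i < attempts; i++ {
		if err = call(ctx); err == nil || !retryable(err) {
			return err
		}
		select {
		case <-time.After(backoff):
			backoff *= 2
		case <-ctx.Done():
			return ctx.Err()
		}
	}
	return err
}

func main() {
	// Simulated flaky call: fails twice with Unavailable, then succeeds.
	n := 0
	err := callWithRetry(context.Background(), 5, func(ctx context.Context) error {
		n++
		if n < 3 {
			return status.Error(codes.Unavailable, "container terminated")
		}
		return nil
	})
	fmt.Println("attempts:", n, "err:", err)
}
```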

The service (and UI) that we stood up had around 200 internal services onboarded (beyond the ones originally planned), thanks to the ease of use of the platform. We built a robust automated testing system and a playground that allowed service owners to execute their services via scripts. We also provided strong internal customer support.

Anvil was eventually integrated into the enterprise-wide CloudOS as part of its compute offering. CloudOS V2 (now V3) is a set of standard pipelines for onboarding, deploying and promoting microservices and other common cloud components, providing automatic integration with monitoring, compliance, and structured logging for participating services. My team built support for serverless (AWS Lambda, Step Functions) within CloudOS V2 and improved the Terraform-based IaC.

Autodesk Customer Environment (Internal)

I led a small but dynamic team tasked with building a critical service that would provide a "space" for each enterprise account within the organisation, used to logically organise and manage all of the cloud-based data for that account. The main challenge was the impact this would have on customers: the service was tied directly to our licensing and subscription services, and would become the way enterprise customers managed their own per-user subscriptions. Despite the complex collaboration involved, and with multiple redundancies in place to ensure customers never lost access to their systems, we delivered on time and made a large impact in automating subscription entitlement & fulfillment.

The service itself was a simple REST API written in Spring/Java and backed by Aurora RDS. The key challenge was integrating with the existing stack of licensing and fulfillment services, which were themselves in the process of migrating to new offerings. This meant integrating with TIBCO, Salesforce, BIM 360 and other internal services to ensure that the right workflow was executed.

Platform Services (Internal)

I led the engineering discipline for all Pune-based teams within my division. This included the teams working on CloudOS V2 and on Observability, where we solved major problems that had plagued the organisation's on-prem Splunk deployment and built a new SLO/SLI service and dashboard, which soon became a mandatory requirement for all tier-0 and tier-1 services (our SVP had the SLO/SLI dashboard set up on a large wall at headquarters). The SLO/SLI service is a key component for all existing and future cloud services, and it is integrated into all standard pipelines.

Another team worked on our cloud platform's ServiceNow customisations. These were complex customisations built by people who had since left the organisation, and the setup was badly broken, causing multiple deployments to fail unexpectedly. I volunteered to take on the architect role for this team since there was a gap. It took some time, but we were finally able to bring our NPS up from -30 to 0. I handed the system over to an enterprise team once it was in a stable state.

Autodesk Data Exchange

image reference: https://help.autodesk.com/view/DATAEXCHANGE/ENU/

I was tasked with building a new interoperability solution for Autodesk. We had a history at Autodesk of not being able to get a viable interoperability solution into the hands of our customers, so I was given a small team of 5-6 people to start with. This was challenging because I had to ramp up a team to work on the platform (API) as well as the connectors for the data exchange, all while keeping pace with our goals. The goals mattered because we were already behind what our competitors were doing. I was eventually able to hire about 30 people to help grow and mature the product, the platform and our teams.

Data Exchange is a platform and a product for data interoperability across various CAD-based products, with a focus on AEC (Architecture, Engineering and Construction) software. The product side consists of 10+ connectors, currently in either public beta or general availability, built on top of a .NET SDK that allows third-party developers to build their own connectors. We also have a GraphQL endpoint for developers, and SaaS-based integrations with Data Exchange (Power BI, Power Automate).
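
Since GraphQL is served over plain HTTP, a developer integration is essentially an authenticated POST. A hedged Go sketch follows; the endpoint URL, query shape and field names are placeholders, not the actual Data Exchange schema:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Hypothetical endpoint and query: the real Data Exchange GraphQL API
	// has its own host, schema, and authentication requirements.
	const endpoint = "https://example.com/dataexchange/graphql"
	query := `query { exchanges(first: 10) { id name } }`

	body, err := json.Marshal(map[string]string{"query": query})
	if err != nil {
		log.Fatal(err)
	}

	req, err := http.NewRequest(http.MethodPost, endpoint, bytes.NewReader(body))
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("Authorization", "Bearer <token>") // placeholder token

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// GraphQL responses wrap the payload in a top-level "data" field.
	var result map[string]any
	if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
		log.Fatal(err)
	}
	fmt.Println(result["data"])
}
```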

With Data Exchange, we've moved from the "early development" phase to the "scaling and resiliency" phase in a short span of ~20 months. The key challenge has been managing multiple development streams, most driven via systems integrators, all while defining the key data models.

The Data Exchange service is composed of multiple microservices and cloud infrastructure components that process data operations to build change-based versioning. Most of the service is written in Spring/Java, and the primary database is DynamoDB, with change data capture used to translate data changes into materialised views and search index updates.
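
A minimal sketch of that CDC fan-out, written in Go for brevity even though the actual services are Spring/Java; it assumes a Lambda subscribed to the table's DynamoDB Stream, and the attribute names and helper calls are illustrative:

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
)

// handler consumes DynamoDB Stream records and projects each change into
// downstream read models: a materialised view and a search index.
func handler(ctx context.Context, e events.DynamoDBEvent) error {
	for _, record := range e.Records {
		switch record.EventName {
		case "INSERT", "MODIFY":
			img := record.Change.NewImage
			// "exchangeId" is an illustrative attribute name, not the
			// actual Data Exchange schema.
			id := img["exchangeId"].String()
			log.Printf("upsert materialised view + search doc for %s", id)
			// updateMaterialisedView(ctx, img) // hypothetical helpers
			// indexSearchDocument(ctx, img)
		case "REMOVE":
			id := record.Change.Keys["exchangeId"].String()
			log.Printf("delete search doc for %s", id)
		}
	}
	return nil
}

func main() {
	lambda.Start(handler)
}
```

Driving the read models off the stream keeps writes fast on the primary table while the views and indices converge asynchronously.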

The client connectors are built using the .NET SDK and are mainly written in C#, with a Tauri-based UI component.