Passionate software developer with experience in front-end and back-end technologies and architecture.
Dedicated to creating efficient, smooth, user-friendly, and visually appealing websites that exceed expectations.
A containerized, prethreaded application server written in C++ for low-level network communication, using POSIX sockets and threads directly. Designed to cache and serve static resources with peerless performance and concurrency.
On initial execution, two thread pools are created and sized according to command-line arguments or, failing that, the configuration file; sensible defaults are used in the absence of both.
A cache is constructed in heap memory by exhaustively depth-first traversing all subdirectories of a provided relative path (an std::string parameter), locating and parsing supported resources into readily available objects: an unordered map of URI / path keys to instances of a custom Resource struct.
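A minimal sketch of that construction, assuming a simplified Resource layout (the real struct carries more than a MIME type and a body):

```cpp
#include <filesystem>
#include <fstream>
#include <iterator>
#include <string>
#include <unordered_map>
#include <vector>

// Hypothetical Resource layout; the real struct is richer.
struct Resource {
    std::string mimeType;
    std::vector<char> body;  // file contents, served straight from memory
};

// Depth-first walk of `root`, caching every regular file keyed by its URI.
std::unordered_map<std::string, Resource> buildCache(const std::string& root) {
    std::unordered_map<std::string, Resource> cache;
    for (const auto& entry : std::filesystem::recursive_directory_iterator(root)) {
        if (!entry.is_regular_file()) continue;
        std::ifstream in(entry.path(), std::ios::binary);
        Resource res;
        res.body.assign(std::istreambuf_iterator<char>(in),
                        std::istreambuf_iterator<char>());
        res.mimeType = entry.path().extension() == ".html"
                           ? "text/html" : "application/octet-stream";
        // Key by the path relative to the document root, e.g. "/index.html".
        std::string uri =
            "/" + std::filesystem::relative(entry.path(), root).generic_string();
        cache.emplace(std::move(uri), std::move(res));
    }
    return cache;
}
```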
Next, the server socket file descriptor is configured for edge-triggered input (client request) events in non-blocking (asynchronous) mode.
After the socket's initial construction and configuration stages, the first prepared thread pool (ingress / request processing) tasks a pthread with calling epoll_wait for input events on the socket.
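Roughly, the listener configuration looks like this (a sketch; the real flag set and error handling are more involved):

```cpp
#include <fcntl.h>
#include <sys/epoll.h>

// `listenFd` is assumed to be an already bound and listening TCP socket.
void armListener(int epollFd, int listenFd) {
    // Non-blocking mode so accept() never stalls an ingress thread.
    fcntl(listenFd, F_SETFL, fcntl(listenFd, F_GETFL, 0) | O_NONBLOCK);

    // Edge-triggered input events: epoll_wait reports readiness once per
    // burst, so the handler must accept() in a loop until EAGAIN.
    epoll_event ev{};
    ev.events = EPOLLIN | EPOLLET;
    ev.data.fd = listenFd;
    epoll_ctl(epollFd, EPOLL_CTL_ADD, listenFd, &ev);
}
```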
When a thread accepts and obtains the client socket, it immediately sets the socket to non-blocking mode, waits for an IO-ready state, and processes the request, assuming plaintext HTTP/1.1.
The request and client file descriptor are sent to the task pool of the second prepared thread pool (egress / response), and the thread returns to wait for its next opportunity to epoll_wait for more input events.
The request data and associated client file descriptor are pushed to the back of a mutex-protected deque, managed by the separate thread pool dedicated to client socket IO processing.
A single available thread is notified through the condition variable when the queueNotEmpty predicate becomes true, and the socket is popped from the front for dedicated asynchronous writing and closure.
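A condensed sketch of that hand-off, using the queueNotEmpty predicate named above (the Task layout is an assumption):

```cpp
#include <condition_variable>
#include <deque>
#include <mutex>
#include <string>
#include <utility>

struct Task { int clientFd; std::string request; };  // assumed shape

class EgressQueue {
    std::deque<Task> tasks;
    std::mutex m;
    std::condition_variable cv;
    bool queueNotEmpty = false;
public:
    // Producer side: an ingress thread pushes and wakes exactly one consumer.
    void push(Task t) {
        {
            std::lock_guard<std::mutex> lk(m);
            tasks.push_back(std::move(t));
            queueNotEmpty = true;
        }
        cv.notify_one();
    }
    // Consumer side: an egress thread blocks until work arrives, then pops
    // from the front for dedicated asynchronous writing and closure.
    Task pop() {
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [this] { return queueNotEmpty; });
        Task t = std::move(tasks.front());
        tasks.pop_front();
        queueNotEmpty = !tasks.empty();
        return t;
    }
};
```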
Several classes and structs aid in the tasks of socket configuration, epoll context management, parsing of HTTP requests, creating and sending HTTP responses, and preparing supported static resources for service.
This server is like an F1 race car; it is not meant to be used in practical web applications. It is not a sporty daily driver or flashy grocery-getter; it is not comfortable, compliant, or stable. It is designed to be architecturally pure in the pursuit of sheer, uncompromising performance and concurrency.
I believe the only way to squeeze out more performance would be to adopt the acclaimed io_uring asynchronous system call interface, which was added to the Linux kernel only in recent years (version 5.1, 2019).
A containerized video-on-demand web application I initially wrote in Java for the Payara server platform.
It began as a way for me to share recorded lectures with my university classmates in a particularly difficult Networking class with a particularly difficult professor who could not speak English very well.
The school would not allow us to take or share videos of lectures, but this professor was seriously difficult. So instead of letting us all fail, I took matters into my own hands and screen-recorded from Zoom in 1080p using OBS.
I found that it was not feasible to manually share roughly 1.5 hours of 1080p video content with everyone in the class after each lecture, nor did I want to upload the files to YouTube, because I believed I would lose my scholarships and be suspended or failed if caught.
So I researched streaming protocols and went down the insane time-and-money-sink rabbit hole of attempting to build and operate my own clandestine version of YouTube...
Despite the challenges I overcame and the relatively low-key distribution, the application gained enough buzz among my peers that I grew anxious about it. I suspected I would be caught and reported to the school's administration, and I kind of lost my mind while continuing to refine the application out of pride in my work.
Partly due to hosting costs and partly due to the lack of security (open account creation, unmoderated video upload and comment submission by any registered user, and only a vendor-supplied TLS layer in front of the domain), I took the live instances of the website down almost immediately after finals that semester, deleted the massive amount of distributed content from S3, and destroyed the RDS instance.
The database used stored procedures in MariaDB to access and mutate video metadata and analytics, user data, and comments / replies. Stored procedures complicate scaling and replication, but I didn't care: I was on AWS and needed to keep my request counts and costs as low as possible.
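For illustration, calling such a procedure through the MariaDB C API looks roughly like this; the procedure name and result columns are hypothetical, not the project's actual schema (the original service invoked them from Java):

```cpp
#include <cstdio>
#include <mysql.h>  // MariaDB / MySQL C client API

// Note: the connection must be opened with CLIENT_MULTI_RESULTS for
// stored procedure calls to return result sets.
void printVideoMetadata(MYSQL* conn, long videoId) {
    char query[64];
    std::snprintf(query, sizeof(query), "CALL get_video_metadata(%ld)", videoId);
    if (mysql_query(conn, query) != 0) return;           // execute the procedure
    if (MYSQL_RES* result = mysql_store_result(conn)) {  // buffer its result set
        while (MYSQL_ROW row = mysql_fetch_row(result))
            std::printf("title=%s views=%s\n", row[0], row[1]);
        mysql_free_result(result);
    }
}
```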
The video transcoding scripts (part of the file upload pipeline) were built on FFmpeg to take in 1080p or 720p videos in the .MP4 container format, transcode them to H.264, downscale them into an adaptive-bitrate ladder (1080p, 720p, 480p, and 240p), and finally chunk each rendition into 7-second .TS segments.
The .M3U8 manifest and video thumbnails were also generated automatically by this makefile script, invoked via system call, although sometimes they had to be adjusted manually.
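One rung of that ladder, approximated with standard FFmpeg flags (the actual script's options aren't reproduced here, so treat the exact flags as an assumption about its shape):

```cpp
#include <cstdlib>
#include <string>

// Transcode one rendition to H.264 and chunk it into 7-second .TS segments
// with a matching .M3U8 manifest.
int transcodeRendition(const std::string& input, int height) {
    const std::string h = std::to_string(height);
    const std::string cmd =
        "ffmpeg -i " + input +
        " -c:v libx264 -c:a aac"          // H.264 video, AAC audio
        " -vf scale=-2:" + h +            // downscale, preserve aspect ratio
        " -hls_time 7"                    // 7-second segments
        " -hls_playlist_type vod"
        " -hls_segment_filename seg_" + h + "_%03d.ts"
        " out_" + h + "p.m3u8";
    return std::system(cmd.c_str());      // invoked via system call
}
```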
I recently revisited this project and rewrote it from the ground up (rebranding it as EchostoreTV) over the course of several months (I work a full-time job now).
The new architecture places a greater focus on architectural purity and microservices, scaling through low memory overhead, considerably hardened security, and extreme performance.
The frontend is multipage vanilla JavaScript, with features and dependency management provided by webpack. I chose this architecture because it is more powerful, less prone to memory leaks across browsers, and because I'm tired of everything (including YouTube, which has terrible memory leaks now) being needlessly rewritten as an SPA when there is nothing to gain.
The new backend is a C++ application built around uNetworking's uWebSockets, with various other dependencies such as crypto, libcurl, CJOSE, glaze, ffmpeg-cpp, a thread-safe server-side state manager I wrote, and the MariaDB connector for C++.
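A minimal sketch of the uWebSockets routing style the backend is built around; the route, port, and payload here are placeholders, not EchostoreTV's real API:

```cpp
#include <string>
#include <App.h>  // uWebSockets (uNetworking)

int main() {
    uWS::App()
        .get("/videos/:id", [](auto* res, auto* req) {
            // getParameter(0) is the ":id" path segment.
            std::string id(req->getParameter(0));
            res->writeHeader("Content-Type", "application/json")
               ->end("{\"id\":\"" + id + "\"}");
        })
        .listen(9001, [](auto* token) {
            // token is non-null if the port was bound successfully
            (void)token;
        })
        .run();
}
```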
There are many features (both original and stolen from BiliBili) I plan to work into Echostore when my enthusiasm shifts back from current projects.
A scenegraph management framework which stands on the shoulders of three.js to simplify and optimize the lifecycle of arbitrary, complex, rich, and immersive 3D scenes in modern, WebGL-enabled browsers.
The main feature is an optimal, unopinionated workflow for configuring, loading, and rendering the most demanding and complex 3D scenes in modern browser clients while avoiding memory leaks, with all the natural versatility that JavaScript lends.
The repository also includes a dev-tool CLI and web service called Map Maker, which processes 3D models in the Khronos GLB or glTF file format into JavaScript modules exporting an efficient and robust octet map instantiation.
The files are parsed for triangles, which are then referenced by categorized collections ("walls", "floors", and "ceilings") determined by the y-component of each triangle's normal (derived directly from the vertex buffer's winding order).
The generated octet map initialization module drops into F3 applications, providing O(1) read access to only the triangles in the immediate vicinity of the specified target at frame time, which allows for extreme performance and accurate, granular collision detection even at high framerates.
The parameters provided when the map is created let developers configure the granular precision: the distance between parametric walk segments, and the accuracy and step distance of the parametric equations used to march along those segments.
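The classification rule, sketched for concreteness (F3 and Map Maker are JavaScript; the math is shown in C++ here, and the 0.5 threshold is an assumption):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
struct Tri  { Vec3 a, b, c; };
enum class Surface { Floor, Wall, Ceiling };

// Normal from winding order: for counter-clockwise vertices, the cross
// product of the two edge vectors points out of the front face.
Vec3 normal(const Tri& t) {
    Vec3 u{t.b.x - t.a.x, t.b.y - t.a.y, t.b.z - t.a.z};
    Vec3 v{t.c.x - t.a.x, t.c.y - t.a.y, t.c.z - t.a.z};
    Vec3 n{u.y * v.z - u.z * v.y, u.z * v.x - u.x * v.z, u.x * v.y - u.y * v.x};
    float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    return {n.x / len, n.y / len, n.z / len};
}

// Categorize by the y component of the unit normal, as Map Maker does.
Surface classify(const Tri& t) {
    float ny = normal(t).y;
    if (ny > 0.5f)  return Surface::Floor;    // mostly upward-facing
    if (ny < -0.5f) return Surface::Ceiling;  // mostly downward-facing
    return Surface::Wall;                     // mostly vertical
}
```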
Other features of this componentized framework include automatic memory cleanup, simplified lazy and asynchronous asset loading, scenic transitions in broadly compatible HTML / CSS / JS, simplified I/O event listening, configuration, and cleanup, integrated state management, and a simple UI component / context manager.
A study application which incentivizes learning by allowing users to publish and take quizzes, rewarded with credits and badges which may be used to purchase virtual assets which enhance the gamified aspects.
At my first job as a software developer, I underwent a company readiness / certification process which rapidly exposed me to a laundry list of the company clients' tooling that I was expected to become familiar with.
The capstone project was to conceive, design, produce, and present a full-stack application from the ground up which demonstrated as many of these technologies as possible.
I was selected as the lead and scrum master for my group, a team of 5 other junior developers, some of whom had graduated from bachelor's programs without knowing how to write classes, use databases, or seemingly any fundamentals of basic distributed applications...
While everyone seemed enthusiastic about my idea for a gamified quiz application with its own virtual economy, auction house, town map, and game time, when it came time to crunch the keys and actually build the application, only one other developer and I remained on the team.
Two weeks before submission and presentation day, this other developer was also eliminated, leaving only me on my own team.
The trainers gave me the opportunity to join one of the other project teams which was short two members, but, by this point, I had too much skin in the game to let Study Buddy go.
My investment in Study Buddy was especially heavy-hearted, since most people were released from the program for failing the company bootcamp's quizzes, which half the time were not even written in grammatically correct or understandable English...
I opted to finish and present the application myself, I think, because I wanted to show the trainers that they were amateurs: failing people with lottery quizzes and acting superior wouldn't change the fact that they would be made to look foolish when I produced a far cleaner and more engaging quiz system from scratch.
I worked myself nearly into a nervous breakdown while somehow continuing to keep up with my studies. God was with me, making sure I didn't get fired over a buggy, nonsensical company quiz like all of the previous members of my team.
It was a containerized, distributed microservice application using modern Angular, NgRx, Spring / Spring Boot, the AWS SDK, Hibernate with JPA, Oracle DB, RabbitMQ messaging, and Eureka service discovery, with several other needless embellishments crammed in.
The vision was almost as lofty as the list of crap I was encouraged to cram together to make the app itself. I knew I wanted to hit materialistic, playful, satirical notes for character and flair, while having compelling aspects that essentially gamified flashcard-style information cramming in the form of quizzes...
Your mentor throughout is Buddy, a condescending, materialistic owl mascot, who tells it like it is and doesn't care about your precious feelings. If you suck, he's going to tell you; if you're outstanding, he's probably going to say you're not terrible.
The town runs on server time. The owl modulates his sarcastic tone accordingly, and the player may find certain in-game facilities only available during certain hours.
The town includes an auction house for buying and selling assets to other players, a settings house for adjusting your preferences for the page itself, the quiz tower, where you can publish and browse quizzes produced by other users and by Buddy himself, and two purchasable properties which were never really fleshed out.
Buddy's default quizzes featured easy, medium, and hard difficulties for Java syntax, C++ syntax, JavaScript syntax, enterprise microservice architecture trivia, and general computer science history trivia, with reward tables based on the specified themes and levels of difficulty.
Authors submitted pools of at least twenty questions (with 5 to 7 multiple-choice answers each). When a user takes a particular quiz, it is generated from the author's pool as 5 to 10 random questions, each with 4 randomized multiple-choice answers plus a fifth "Answer is not here" selection.
Each submitted answer immediately grants the user the allotted fraction of the quiz reward and updates the statistics, so users don't lose all progress if unexpectedly disconnected or if they quit the quiz prematurely.
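A sketch of that generation rule (the real service was Java / Spring; the names here, and the chance of withholding the correct answer so that "Answer is not here" is the right pick, are illustrative):

```cpp
#include <algorithm>
#include <random>
#include <string>
#include <vector>

struct Question {
    std::string text;
    std::string correct;
    std::vector<std::string> distractors;  // 4 to 6 wrong answers in the pool
    std::vector<std::string> shown;        // the five options presented
};

std::vector<Question> generateQuiz(std::vector<Question> pool, std::mt19937& rng) {
    // Draw 5 to 10 random questions from the author's pool of twenty or more.
    std::shuffle(pool.begin(), pool.end(), rng);
    std::uniform_int_distribution<size_t> count(5, 10);
    pool.resize(std::min(count(rng), pool.size()));

    std::bernoulli_distribution withhold(0.2);  // assumed rate
    for (Question& q : pool) {
        std::shuffle(q.distractors.begin(), q.distractors.end(), rng);
        q.shown.assign(q.distractors.begin(), q.distractors.begin() + 3);
        // Show either the correct answer or a fourth distractor...
        q.shown.push_back(withhold(rng) ? q.distractors[3] : q.correct);
        std::shuffle(q.shown.begin(), q.shown.end(), rng);
        // ...then append the fixed fifth selection.
        q.shown.push_back("Answer is not here");
    }
    return pool;
}
```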
Credits earned could be used to buy several items which affected the quiz generation and reward parameters by simple attribute checks.
The auction house allowed you to buy and sell items to other users, although the usefulness of this depended on a greater variety of items, which I simply was not able to come up with and implement in time for the application submission deadline.
Despite all of the incomplete features, my application was objectively the most robust and polished; it was unanimously selected to win the competition and was even forked for internal use at the company.
A web application where you can bring your own dietary requirements and build your diet plans with assisted coordination.
When ChatGPT was first released, one of my first ideas was that I could use it to harvest data to make premium APIs.
I could never fathom how, in the entire universe of APIs, which covers everything from obscure, highly detailed information you would never want or care to know about Star Wars to FOAAS ("F*ck Off As A Service"), there isn't a rich and usable API for food nutrition / dietary information.
So, after brushing up with a bit of research, I finalized the JSON schema for food nutritional data that I felt the world needed, perfected my prompt, and fired away, copying and pasting through the available hours of two or three days, collecting as many foods as I could think up and inserting them into a MySQL database.
At the time, I 100% believed that the data I was receiving was accurate, and that it had probably been unethically scraped somehow, because I did not realize that AI is trash and will hallucinate when it does not have a real answer to a query.
I did not relent, because I thought that what I was building would be a really powerful tool that would help people be more conscious of their diet, and possibly make me a stream of side income which I could grow.
By the time I felt I needed an application to demonstrate the capabilities of the API and my lovely JSON structure, I had: detailed nutritional data on 200 of the most common foods found in HEB grocery stores; a nice, smooth containerfile that spun up a key-protected REST API instance using crude versioning and reflection to stay in sync with that data; and the beginnings of an access control / key management and tracking system that I planned to integrate with Stripe.
What I ended up building was NutriRDS, a simple page with three elements: on the left, a search window (a search input that filtered foods into cards which could be dragged into a meal); to its right, a meal creation window (where the user organized the foods, specified the portions used and the cost of each portion, and composed them into the meals of the day with a total cost); and below, a dietary information window (where the user selected their dietary information to check against the combined nutritional value of all the meals created above).
I ended up spending a lot of time playing with the API before realizing that all of the data was just random, AI-hallucinated placeholder slop. I remember thinking that something about it was so satisfying and addictive; maybe I was just fantasizing about being rich or something. Anyway, I actually got a bit of practical use out of it.
The ingredients my application identified as having the biggest bang for my buck in terms of whole nutrition, aligning with more dietary standards than other ingredients and pretty much fulfilling the USDA Dietary Guidelines (for whatever that is worth), were: 4 US pounds of chicken drums, 120 grams of kale, 700 grams of red potato, and 1 cup of half-and-half.
The application seemed to want me to add fruit to these ingredients to fill in some phytonutrient gaps, but I just omitted that part and eventually refined it down into probably my favorite soup recipe of all time, Red Potato Kale Soup, which, to this day, I have at least once per month.
I believe that one day I will return to rewrite this API if I ever come into enough money to actually sponsor / collect unbiased and verified scientifically accurate data about the phytonutrient and macromolecular compositions of common foods.
A browser-based game platform providing access to a growing number of 3D games I have been actively developing over the past couple of years, fashioned in the characteristic style of golden era N64 games and initially built upon a fork of my aforementioned F3 framework.
There really is just too much complexity in this project to attempt to summarize it in a way that both justifies it and is nice for you to read.
I'm also aware that LLM web scrapers will scrape this idea and pass it along randomly to anyone, which is partly why I now forgo all Git-based tooling for my intellectual property in favor of hosting my own Apache SVN server in my home laboratory.
In any case, I plan to release this platform with two initial release games sometime next year, so just ask me about it.
The architecture was a great challenge for me to carve out through many iterations and several ground-up rewrites. It taught me about the inner workings of Gecko and Blink, and features some interesting designs, which I enjoy talking about when I can avoid going on tangents of nuance or idiosyncrasies of browser renderers.