MATSING LEARNING
A space where I place my thoughts, experiences, learnings from my craft
Working with multiple forms in Django—whether for editing a list of items, adding related records, or building interactive user interfaces—can quickly get messy. Django’s built-in widgets and formsets help to some extent, but they fall short when it comes to modern UI/UX, dynamic behavior, and integration with JavaScript or custom widgets.
That’s where jrief/django-formset comes in. It brings a modern, JavaScript-aware approach to Django forms, making it easier than ever to create intuitive and scalable form interfaces. The library is a drop-in replacement that greatly extends the power and flexibility of Django’s native formsets. It’s designed for developers who want to create dynamic, user-friendly, and interactive form interfaces without reinventing the wheel or spending extra development effort on form beautification.
Reasons why I love jrief/django-formset:
Rich Widgets for Better UI
One of the standout features of django-formset is its collection of custom widgets, which immediately upgrade the default Django form experience. For example, the enhanced FileField widget allows for smooth handling of file uploads across multiple forms, a task that can be error-prone with vanilla Django formsets.
Another major improvement is the integration of the Selectize widget, which turns standard dropdown fields into searchable, autocompleting inputs. This is especially helpful when users need to select from large datasets, such as a list of tags, authors, or products. Instead of manually scrolling through a long list, users can type to filter options in real-time. Additionally, the dual selector widget offers a polished interface for selecting items from one list and moving them to another, which is ideal for assigning categories, permissions, or related models without overwhelming the user.
Finally, the inclusion of a RichText widget is a game-changer for forms that require more than simple text input, such as fields where users write descriptions, blog posts, or product details. It enables users to add bold, italics, links, lists, and even images directly within the form without requiring HTML knowledge. The result is a more polished and user-friendly form interface that is perfect for content creation workflows.
UX-Driven Formset Functionality
The overall user experience when working with formsets is greatly improved thanks to a cleaner layout, better validation feedback, and responsive behavior. django-formset takes care of dynamic behaviors like adding and removing forms without clunky hacks or brittle JavaScript. Whether users are creating three items or thirty, the UI remains consistent, usable, and visually clean.
Dynamic Multi-Form Handling
Managing many forms on a single page—especially when dealing with related objects—has traditionally been a challenge in Django. With django-formset, working with multiple forms becomes a straightforward process. You can easily collect, validate, and save many forms at once. Whether you're adding line items to an invoice, creating multiple tasks under a project, or attaching several authors to a publication, this system supports it all natively and without excessive boilerplate.
JavaScript Integration and Additional Server-Side Calls for Dynamic Interactions
One of the standout features of django-formset is its seamless integration with JavaScript, enabling dynamic form behaviors like adding, removing, or reordering forms without the need for page reloads. This creates a smoother, more interactive experience for users, as forms respond to their actions in real-time. Developers can also incorporate custom JavaScript logic for complex interactions, such as triggering behavior based on the validation state of a form through the library’s Button Actions.
Beyond client-side interactivity, django-formset allows you to integrate partial server-side controls or validation as well. This means you can send form data to the server for additional rule-based validation before the form is fully submitted. For example, you can validate an email address or update dropdown options dynamically based on previous user selections—without requiring a page refresh. This combination of dynamic frontend behavior and server-side validation improves both performance and user experience, making the form submission process faster, more efficient, and more intuitive.
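As a rough illustration of the kind of rule-based check such a server-side call might run, here is a minimal, framework-free sketch in plain Python. The function name, payload shape, and regex are assumptions for illustration only, not part of the django-formset API:

```python
import re

# Hypothetical rule-based check that a server-side endpoint might run
# before the full form is submitted. The payload shape below is an
# illustrative assumption, not django-formset's actual wire format.
EMAIL_RE = re.compile(r"^[\w.+-]+@[\w-]+\.[\w.-]+$")

def validate_email_field(value: str) -> dict:
    """Return a JSON-ready payload describing the validation result."""
    if EMAIL_RE.match(value):
        return {"valid": True, "errors": []}
    return {"valid": False, "errors": ["Enter a valid email address."]}
```

In a real project, logic like this would live in a Django view that the form calls asynchronously, returning the payload so the frontend can display feedback without a page refresh.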
Seamless Foreign Model Record Creation
Another standout feature is django-formset's ability to easily manage foreign key relationships between models. If you have a form where a user needs to create or assign related records—such as adding tags to a post, linking authors to a publication, or assigning users to a group—django-formset makes it easy to do so.
For select widgets with foreign key relations, django-formset also automatically deals with filtering form field choices based on dependent selection from another form field.
jrief/django-formset is far more than just an enhancement to Django’s built-in formsets—it's a comprehensive framework for building modern, dynamic form interfaces. By supporting advanced widgets, dynamic frontend interactions, and efficient handling of related models, it provides all the tools you need to create intuitive and scalable form workflows. In many ways, it feels like the missing piece in Django’s form handling ecosystem. I truly believe that this library addresses many pain points in form handling that Django’s built-in tools don’t fully solve. If only it could be adopted natively by Django, it would bring a whole new level of flexibility and usability to the framework, streamlining form management for developers and providing a much better user experience out of the box.
To Jacob,
Thank you so much for your contributions to open source projects and your passion for easing developer pain points. It has been an honor to meet you personally, and it was insightful to learn about your system workflow and how it can unify divided functions in an organization into a single, centralized system.
Developers have many tasks in software development. We design, build, test, and maintain systems—but one role that often goes unspoken is that of being a privacy police. Before a system goes live, it’s not enough to check if it works or if it meets business requirements. User data protection must already be built in from the very start. Privacy should not be an afterthought but should already be part of responsible system design.
Privacy must come first
Information systems are inherently interdisciplinary. They involve not just developers, but also clients, managers, and external consulting teams. In this mix, business requirements often take the front seat, while policies and legalities are reviewed later. And this is where the risk creeps in. What may seem like a harmless delay in addressing privacy concerns can actually open the door to significant vulnerabilities.
When it comes to data privacy, being oblivious at the beginning is a dangerous gamble. A single oversight can lead to breaches, lawsuits, and a loss of user trust that no feature set can repair. That’s why developers should see themselves as the first line of defense—ensuring that user privacy is considered at the foundational level of system design.
To serve users responsibly, developers can integrate these practices from the ground up:
Role-Based Access: Limit data access only to those who need it. Not every user—or developer—should have the same privileges.
Unit Testing for Privacy & Security: Write automated tests that verify sensitive data is handled properly.
Penetration Testing: Test systems for vulnerabilities before attackers do.
Privacy Notice and Consent: Provide users with clear, transparent communication about how their data is collected, stored, and used—and ensure they have the choice to give or withdraw consent.
Data Removal Process: Establish clear procedures for deleting user data upon request or when it’s no longer needed. This not only respects user rights but also minimizes risk in the event of a breach.
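To make the first item concrete, role-based access can be as simple as a guard that runs before any sensitive operation. Below is a minimal, framework-free Python sketch; the `User` class, role names, and decorated function are illustrative assumptions, not any framework’s real API:

```python
from functools import wraps

# Illustrative user type: real projects would use their framework's
# authentication model instead of this toy class.
class User:
    def __init__(self, name, roles):
        self.name = name
        self.roles = set(roles)

def requires_role(role):
    """Allow the wrapped operation only for users holding the given role."""
    def decorator(func):
        @wraps(func)
        def wrapper(user, *args, **kwargs):
            if role not in user.roles:
                raise PermissionError(f"{user.name} lacks role {role!r}")
            return func(user, *args, **kwargs)
        return wrapper
    return decorator

@requires_role("admin")
def export_user_data(user):
    # Sensitive operation: only reachable when the role check passes.
    return "exported"
```

Django, for instance, ships with a built-in groups-and-permissions system that implements this idea at the framework level, so you rarely need to hand-roll it.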
Just like police officers who vow to protect and serve their communities, developers too must take on the responsibility to protect their users’ data before serving it online.
In my experience in the research field, I find that the concept of Object Relational Mapping (ORM) has not fully reached research-based projects or initiatives. In many cases, we are still writing native SQL and Mongo queries, embedding raw statements directly in code. While this works for exploratory setups or small-scale prototypes, it often becomes fragile and difficult to maintain as projects grow.
As an avid lover of Django, I am also a loud advocate of ORMs. Despite the overhead they can introduce, the upsides far outweigh it, making them a solid candidate for industry-standard use in production-deployed projects.
First, ORMs help with consistency. ORM libraries enforce a consistent way of interacting with databases. Instead of one developer writing SQL in their own style, another writing Mongo queries differently, and yet another mixing inline queries with application logic, these libraries give everyone the same abstraction layer. The result: fewer surprises and more uniform codebases. In my case, I've pushed projects that do not use Django toward the following Python libraries: SQLAlchemy for SQL databases and MongoEngine for MongoDB. That way, if developers I've worked with on one project switch to another project using a different Database Management System (DBMS), we shorten the learning curve caused by differing query syntaxes and fall back on the common language of the ORM instead.
The second advantage is maintainability. As research prototypes evolve into production systems, maintainability becomes critical. Raw queries often pile up and turn into technical debt—especially when schema changes occur. ORM libraries mitigate this by tightly coupling models to database schemas. When you rename a field or refactor a table, the ORM surfaces errors directly in the application layer, giving you an early warning system rather than silently breaking functionality.
Lastly, ORMs can significantly reduce development time. One of the most natural benefits of an ORM comes from its object-oriented design. Instead of juggling two tools, the DBMS and the application source code, we can invest and retain our focus within the source code. We eliminate the need to open a separate GUI window for the DBMS and the need to learn its query language. ORMs unify these concepts through the construction of model classes (our DB schema).
With the introduction of ORM models, developers can think in terms of objects and relationships. We leave it to ORM libraries to translate our models and filters into efficient database queries under the hood. This abstraction lowers the barrier for developers who may not be database experts but still need to build reliable systems.
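To show what that translation looks like, here is a deliberately tiny toy version of the idea using only the standard library’s sqlite3 module. Real ORMs (the Django ORM, SQLAlchemy) are vastly more capable and safer; this sketch only illustrates the mapping from objects and keyword filters to SQL:

```python
import sqlite3

# Toy illustration of what an ORM does under the hood: a model class whose
# fields mirror a table, plus a filter() that builds the SQL for you.
# Not production code: real ORMs also handle escaping, joins, migrations, etc.
class Author:
    table = "author"
    fields = ("id", "name", "country")

    def __init__(self, id, name, country):
        self.id, self.name, self.country = id, name, country

    @classmethod
    def filter(cls, conn, **kwargs):
        # Translate keyword filters into a parameterized WHERE clause.
        where = " AND ".join(f"{k} = ?" for k in kwargs)
        sql = f"SELECT {', '.join(cls.fields)} FROM {cls.table}"
        if where:
            sql += f" WHERE {where}"
        return [cls(*row) for row in conn.execute(sql, tuple(kwargs.values()))]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE author (id INTEGER, name TEXT, country TEXT)")
conn.executemany("INSERT INTO author VALUES (?, ?, ?)",
                 [(1, "Rizal", "PH"), (2, "Orwell", "UK")])

# Think in objects and filters, not SQL strings:
ph_authors = Author.filter(conn, country="PH")
```

The application code at the bottom never touches SQL; that is the abstraction the rest of this post argues for.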
ORMs shine even more thanks to modern IDE features such as code completion. Instead of memorizing table structures and field names, developers can rely on autocompletion and type hints enabled by the object-oriented design. This improves productivity and reduces human error. Raw SQL, on the other hand, lives in strings that editors can’t parse as easily, leaving room for typos and mismatches that only appear at runtime.
I understand that research projects often prioritize rapid prototyping over scalable systems. But as these projects mature and aim for real-world deployment, cutting corners on database interaction creates long-term costs and technical debt. ORMs may not be perfect—they do introduce some performance trade-offs—but their advantages in consistency, maintainability, scalability, and reduced development time far outweigh the overhead.
If we start introducing ORMs to junior developers early in their software development journey, we can accelerate learning curves, prevent bad practices from taking root, and save countless projects from unnecessary complexity.
In software development, one of the most important principles to follow is DRY: Don’t Repeat Yourself. At its core, DRY is about writing code once and reusing it, rather than duplicating the same logic across different parts of your application. Repetition might feel harmless at first, but over time it leads to bloated code, harder maintenance, and more chances for bugs to sneak in.
When working with frameworks like Django, this is where Class-Based Views (CBVs) shine. They provide a structure that naturally supports DRY, making your applications more maintainable and easier to extend.
Repetition is Costly
In many projects, the same tasks appear across multiple views: passing common data to templates, applying similar filters, or handling repeated request logic. If each view handles this separately, you end up repeating the same steps in multiple places. This repetition becomes a burden. If the business rule changes or you need to adjust the way data is handled, you must update every single instance of that logic. One missed change can introduce inconsistencies or bugs.
Class-Based Views help avoid repetition because they are built on object-oriented principles. Instead of writing the same functionality multiple times, you can:
Encapsulate shared logic inside a base class
Extend and reuse behavior across different views without rewriting it
Override only what is necessary, when it is necessary
Inheritance is at the heart of CBVs’ ability to support DRY. By placing common behaviors in a parent class, you ensure that every view inheriting from it benefits automatically. If you ever need to make a change, you update the parent, and the improvement flows everywhere it’s used.
This eliminates duplication while giving you flexibility: you can keep your codebase both lean and adaptable.
This approach means you describe what changes, not how to rebuild everything from scratch each time.
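The pattern can be sketched in plain Python, mimicking how CBVs share logic through a parent class. The class names and methods here are illustrative, not Django’s actual generic views:

```python
# Plain-Python sketch of the CBV idea: shared logic lives once in a base
# class; each concrete view overrides only what differs. Illustrative
# names only -- not Django's real view classes.
class BaseListView:
    model_name = "item"

    def get_context(self):
        # Common data every page needs, defined exactly once.
        return {"site_name": "My Site", "items": self.get_items()}

    def get_items(self):
        # Default behavior; subclasses override this single hook.
        return []

class BookListView(BaseListView):
    model_name = "book"

    def get_items(self):
        # Only the part that differs is rewritten.
        return ["Django for Beginners", "Two Scoops of Django"]
```

If the shared context ever needs to change, you edit `get_context` in the base class once, and every inheriting view picks up the fix automatically.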
DRY isn’t just about saving keystrokes. It’s about building consistency. When views share common structures and logic, your project becomes easier to read, understand, and maintain. Future developers won’t have to hunt down scattered copies of the same code. Instead, they’ll find one clear definition that governs everything.
The DRY principle is about efficiency, clarity, and long-term sustainability. Class-Based Views embody this principle by giving developers the tools to centralize logic, reuse patterns, and rely on inheritance to scale projects gracefully.
Repetition may feel quick in the short term, but it always catches up. By embracing DRY through CBVs, you’re not just writing code — you’re designing a system that’s easier to grow, maintain, and trust.
When it comes to choosing an editor, developers are often split between the simplicity and speed of Vim and the feature-rich experience of Visual Studio Code (VSCode). But what if you didn’t have to choose? Combining VSCode with Vim keybindings gives you the best of both worlds—a setup that’s both fast and highly productive.
Vim Key Combinations Make You Fast
Vim’s biggest strength lies in its modal editing. Once you get used to the key combinations, you’ll realize how much time you save by not constantly reaching for the mouse. Actions like jumping between lines, deleting chunks of code, or repeating commands become second nature.
When paired with VSCode’s Vim extension, you carry these powerful motions into a modern IDE. Instead of juggling between two editors, you keep the muscle memory of Vim while benefiting from VSCode’s ecosystem.
VSCode Works Out-of-the-Box
One of Vim’s biggest hurdles is its steep learning curve—not just for keybindings, but for configuration. Making Vim “IDE-like” often requires painstaking setup with plugins, configs, and dotfiles.
VSCode, on the other hand, ships with a strong foundation right away: syntax highlighting, debugging, linting, version control integration, and a polished UI. With extensions available in just a few clicks, you don’t waste hours tinkering with .vimrc—you can start coding immediately.
VSCode has quickly become the most popular editor among developers. That popularity means an active extension marketplace, massive community support, and frequent updates backed by Microsoft. If you run into an issue, there’s a high chance someone else has already solved it.
Vim has its legendary community too, but for most day-to-day development workflows, VSCode’s ecosystem provides a smoother and more approachable experience.
One of Vim’s strongest selling points is that it’s installed everywhere. If you SSH into a server, Vim is almost guaranteed to be there. However, managing configurations across multiple servers can quickly become a pain. Your local Vim setup may not match what’s available on the remote system, and tweaking each server eats up valuable time.
VSCode solves this with its Remote - SSH extension. It lets you seamlessly work on remote servers while keeping your local environment intact. You edit files on the server through VSCode as if they were local, with the added comfort of your familiar extensions, themes, and settings. This reduces friction and makes inter-operations between local and remote development environments much smoother.
Pairing Vim keybindings with VSCode is like having the agility of a sports car with the comfort of a luxury ride. You get:
Speed from Vim’s editing model
Out-of-the-box functionality from VSCode
A massive support network of developers and extensions
Hassle-free remote development with the SSH extension
If you’re a developer looking to level up productivity, this combo is worth trying. It’s not about picking sides—it’s about finding the workflow that lets you code faster, smarter, and with less frustration.
From my experience in R&D, data is the prime factor to consider before finalizing a plan of action. Without data, how can we empirically and justifiably present valuable information to stakeholders?
Many times, I’ve had ambitious research topics in mind, only to be pulled back to reality: I first need to find a way to collect data—whether existing or non-existent.
Working on various projects has opened my eyes to the endless possibilities of data sources. Here are some methods that have helped me in the past:
Utilize search engines. The first thing I do is type in keywords related to the topic at hand. You’d be surprised at the vast availability of sources if you dig deep enough.
Scour the internet for reports. Published reports are often available on the websites of private and public organizations. Don’t just settle for what search engines immediately return. Visiting these sites directly often reveals patterns or recurring structures in their reports, which can help you extract information more systematically.
Request data directly. Contact organizations or institutions—even if they don’t publish raw data. Sometimes reaching out, referencing their published work, and asking politely can yield results. You never know—they might just share the data you need.
Scrape data from websites. Not everyone is aware that data displayed on websites can be extracted in an organized format (e.g., tables). This process, called web scraping, can also be done through software tools or custom scripts.
Experimentation. When data doesn’t exist, collect it through empirical methods. This has always been my last resort since it requires more time for data gathering, processing, and analysis. But in some cases, it’s the only way forward.
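As a small taste of the web-scraping approach above, here is a self-contained sketch using only Python’s standard library to pull the text out of HTML table cells. Real projects would typically fetch pages with urllib or requests and parse them with a library such as BeautifulSoup, but the principle is the same:

```python
from html.parser import HTMLParser

# Minimal scraping sketch: walk an HTML snippet and collect the text of
# every <td> cell. Stdlib-only for illustration; dedicated parsers are
# more robust against messy real-world markup.
class TableCellExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_cell = False
        self.cells = []

    def handle_starttag(self, tag, attrs):
        if tag == "td":
            self.in_cell = True

    def handle_endtag(self, tag):
        if tag == "td":
            self.in_cell = False

    def handle_data(self, data):
        if self.in_cell and data.strip():
            self.cells.append(data.strip())

html = "<table><tr><td>2023</td><td>1,204</td></tr></table>"
parser = TableCellExtractor()
parser.feed(html)
# parser.cells now holds the extracted cell text, ready for a CSV or DataFrame
```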
I feel some of these approaches should be common knowledge, but I’ve noticed that people often dismiss them—especially if the method involves re-encoding or reorganizing data. As a result, they give up on feasible data sources, hoping instead to find a more convenient option.
The reality is, technology has advanced so much that we should take advantage of the tools available to us.
Tools for Extracting Data
Just as people should be aware of different data sources, they should also be aware of tools that can help extract and organize data from more difficult formats. Most often, data from the internet comes in PDF documents or image files—formats many see as dead ends. Faced with mountains of scanned tables or charts, people often accept defeat too early.
But with a little resourcefulness, these challenges can be solved. Here are some tools I regularly use:
PDF Tables – converts PDF data into tabular formats.
Online OCR – extracts text from images and scanned files.
Python libraries (useful when handling large volumes of files):
tabula-py
pytesseract
These tools have saved me countless hours that would otherwise be wasted manually retyping data.
There’s something about programming that makes it especially alluring for clever, critical, and logical thinkers. Some people naturally believe there’s always a better or more efficient way to handle any task at hand.
Personally, I often find myself frustrated with redundant tasks, so I use coding to automate them. That mindset influenced my decision to pursue a career that lets me code. From then on, I knew I wanted to work in Software Development or Software Engineering.
Eventually, I landed a job in the field—but I had no real clue about the practices and tools widely used in the discipline. I essentially knew how to code, but many of the terms and processes were completely unfamiliar to me. What I quickly learned is that there’s much more to consider before you can finally deploy your code.
I’m grateful that my first company provided proper training for fresh hires and acknowledged that new recruits may come from different academic backgrounds. They understood the need to establish a baseline before assigning serious work. Still, during onboarding, I often felt intimidated by my colleagues. I had a degree in Electronics and Communications Engineering, not Computer Science. In some sessions, they already knew the material being taught, while I was in awe of the various tools and workflows being introduced.
For anyone who feels the way I once did, I’d like to share a resource that helped me a lot. Instead of writing everything down here (because let’s be honest—we tech folks don’t always like long reads), I want to recommend a video playlist that covers the essentials: The Missing Semester.
This course, taught at MIT, was designed because:
“Classes teach you all about advanced topics within CS, from operating systems to machine learning, but there’s one critical subject that’s rarely covered, and is instead left to students to figure out on their own: proficiency with their tools. We’ll teach you how to master the command line, use a powerful text editor, leverage version control systems, and much more!”
Here are some of the topics covered in the series:
Course overview + the shell
Shell tools and scripting
Editors (Vim)
Data wrangling
Command-line environment
Version control (Git)
Debugging and profiling
Metaprogramming
Security and cryptography
Potpourri
Don’t be intimidated if you don’t recognize some of these terms! Just start with the first video—you’ll be surprised how quickly things start to click.
When it comes to development, I’ve always been a fan of using Linux.
I’ve made it a habit to install a Linux distribution alongside my Windows OS. While I used to distro-hop a lot, I always find myself coming back to an Ubuntu-based distribution (Ubuntu or Pop!_OS) because of the resourceful forums and active community. I also prefer GNOME-style desktops, even though I’m aware they can be resource-hungry.
Why I Dual-Boot Linux
I didn’t want to go through the hassle of adding extra apps on top of a terminal (too many apps eat up memory).
Most servers are Linux-based, so I wanted to familiarize myself with that environment.
I’m an advocate of open-source products (a.k.a. I’m cheap—I always try free alternatives first).
My laptop only had an HDD (no SSD upgrade yet), so Windows boot times were painfully slow (again, frugality 😅).
Why I still needed Windows
I ended up using Linux about 95% of the time for development work. But I still needed Windows when I had to open a file or app that wasn’t Linux-compatible. Especially for data analysis projects, most files are Excel-based, and GUI statistical software is simply more available and more refined on Windows systems.
Eventually, I was gifted an SSD, and it made a huge difference in boot times and application performance. With that, I started to reconsider my workflow. I wanted to find out whether there was a way to fully use Linux or Windows without needing to hop from one OS to the other.
Windows Subsystem for Linux (WSL)
I finally found the workaround: WSL, which runs a Linux environment directly inside Windows. For me, this is the best choice for developers, data scientists, and technical professionals who need the power and flexibility of a Linux environment — without dual booting or running a full virtual machine.
I also want to share resources for setting up WSL with a GUI. While a GUI isn’t usually needed, I personally need it for simulating and debugging web crawlers for data mining tasks. Watching how crawlers traverse a site helps with fine-tuning.
Here are some resources I found helpful:
How to install WSL2 – pretty straightforward.
How to enable GUI apps for WSL – Microsoft’s documentation doesn’t always work out-of-the-box. You’ll need VcXsrv (or an alternative) installed locally.
I bombed my first interview. Granted, I didn’t really know much about coding back then, but I’d like to believe I had a passion for it. Among all the courses I took during my undergraduate degree, I loved my coding classes the most and often excelled in them compared to my other majors. This gave me a false sense of confidence and convinced me that I could rely on stock knowledge alone.
I feel like it’s standard to use coding test platforms to prepare for software development job interviews. Most of the time, recruiters test a candidate’s knowledge of algorithms and data structures. This is why HackerRank and other similar platforms are so appealing for last-minute preparation.
I attended the test and interview after only practicing a few rounds on HackerRank. Although they acknowledged the result of the coding exam, my interview didn’t go so well. It dug deeper into my knowledge and practice of the craft—something I had little experience in outside of class projects, which were always scoped and decided by the professor. I wasn’t really equipped with standard practices in a work setting during my undergraduate studies. Our education was heavily concentrated on theories since we were in a research institution.
In retrospect, I appreciate my interviewers. They treated me kindly despite my inexperience, and I left with a wonderful piece of advice as we ended. They said:
“Try developing a project you’re interested in.”
Eventually, I was able to land an entry-level job as a software developer, but still without a portfolio. While working at my first job, I was also pursuing graduate studies, which motivated me to do a mini project: developing a Point-of-Sale (POS) system for my parents’ restaurant. I submitted it to my professor, who was impressed by my commitment to the project. After that, I was offered an academic research project, which I gladly accepted. From then on, that decision opened more opportunities for me—opportunities I remain grateful for today.
Looking back, I didn’t truly appreciate my interviewers’ advice until I started working on that mini project.
Coding Tests vs. Mini Projects
Coding tests focus on algorithms and data structures.
Coding tests measure your conceptual knowledge of a programming language.
Coding tests are designed to be solved within a limited time frame.
Mini projects are multidisciplinary.
Mini projects provide experience-based learning.
Mini projects usually have a longer time frame than coding tests.
My main takeaway is that doing mini projects helped me retain software development concepts and practices far more effectively than practicing coding tests alone. They didn’t box me into focusing only on syntax or solving isolated problems. Instead, they enlightened me about the broader aspects of software development.
Software development is a vast discipline. It’s not just about writing code and hoping it works—it’s a pipeline that involves planning, implementation, testing, and debugging.
To be clear, I’m not discouraging the use of coding tests. They’re inevitable in job interviews and usually the first phase of the process. But having a portfolio of mini projects and hands-on experience will give you an edge in the final interview.
I just wanted to share a piece of my journey in this blog. In my next posts, I plan to be more objective and share materials I’ve used for self-learning to become a better software developer. I also want to share some standard practices in software engineering that weren’t taught to me before I entered the industry.