From a data management, data engineering, and data analytics perspective, basic EDA forces you to slice and group your data by like data types. This creates a situation where you must do basic data management and data engineering to treat data quality and integrity issues before doing any advanced work. For example, low-quality data is data that is incomplete, inconsistently encoded, or simply hard to use as-is.
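As a sketch of that first slice-and-group pass, the snippet below uses pandas on a small hypothetical dataset (the column names and values are invented for illustration) to separate columns by data type and surface two common quality issues: missing values and a category encoded inconsistently.

```python
import numpy as np
import pandas as pd

# Hypothetical dataset illustrating common quality issues:
# missing values and the same category encoded two ways.
df = pd.DataFrame({
    "age": [34, np.nan, 52, 41],
    "income": [55000, 62000, np.nan, 48000],
    "segment": ["retail", "Retail", "wholesale", "retail"],
})

# Slice the frame into groups of like data types.
numeric = df.select_dtypes(include="number")
categorical = df.select_dtypes(exclude="number")

# Surface integrity issues before doing anything advanced.
missing_per_column = df.isna().sum()
distinct_segments = df["segment"].str.lower().nunique()

print(missing_per_column)
print(distinct_segments)  # case-normalizing reveals 2 segments, not 3
```

Even this minimal pass forces the data management work the paragraph describes: you cannot group or aggregate `segment` sensibly until the encoding is cleaned up.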
From a statistics and artificial intelligence and machine learning (AI/ML) perspective, data comes in different forms, which means you cannot do clustering directly if you have, for example, a mix of continuous and discrete values or of numerical and categorical data.
From a data retrieval and natural language processing (NLP) perspective, EDA helps you see patterns too. It helps you create a baseline for whatever problem you are trying to solve or investigate.
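One common way to create such a baseline is a trivial model that any real approach must beat. The sketch below (using scikit-learn's bundled iris dataset purely as an illustration) fits a majority-class baseline:

```python
from sklearn.datasets import load_iris
from sklearn.dummy import DummyClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Majority-class baseline: always predicts the most frequent label.
# Any model worth keeping must outperform this score.
baseline = DummyClassifier(strategy="most_frequent").fit(X_train, y_train)
print(round(baseline.score(X_test, y_test), 2))
```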
Overall, for EDA to work (for you to drive the experience), you have to ideate and visualize your objective first. Feeding data into Python (scikit-learn, PyTorch, etc.) or R (caret, dplyr, etc.) without a goal, scope, or purpose will give you some values (a result), but you will not be driving the experience; the Python and R libraries other people wrote will. This means you will not be able to explain your EDA to your professor (if it is a school project) or to your boss (if it is part of a job project).