In the context of ensembling, the aim of both bagging and pasting is to get a diverse set of estimators despite each estimator using the same algorithm.
The diversity comes from how you set up the data for individual estimators. Both bagging and pasting randomly select samples for each estimator, so each estimator trains on a different subset of the data.
When selecting samples for an estimator in the ensemble, bagging and pasting both start from the entire training set. Bagging samples with replacement: a sample that has already been drawn can be drawn again, so a subset may contain duplicates. Pasting samples without replacement: once a sample has been selected for an estimator, it cannot be selected a second time for that same estimator.
As an example, if the entire dataset is [10, 20, 30, 40, 50] and you want each estimator trained on 80% of the dataset (4 samples), then a bagged subset could be [10, 10, 10, 20] (duplication is permitted), whereas a pasted subset could be [10, 20, 30, 40] (no sample duplication).
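The example above can be sketched with Python's standard library, where `random.choices` draws with replacement (bagging-style) and `random.sample` draws without replacement (pasting-style). The dataset and 80% subset size are taken from the example; the variable names are just illustrative.

```python
import random

data = [10, 20, 30, 40, 50]
k = 4  # train each estimator on 80% of the dataset

# Bagging-style subset: sampling WITH replacement, so duplicates can appear.
bagged_subset = random.choices(data, k=k)

# Pasting-style subset: sampling WITHOUT replacement, so every sample is distinct.
pasted_subset = random.sample(data, k=k)

print(bagged_subset)  # may contain duplicates, e.g. [10, 10, 10, 20]
print(pasted_subset)  # always 4 distinct samples, e.g. [10, 20, 30, 40]
```

Running this a few times makes the difference concrete: the bagged subset occasionally repeats a value, while the pasted subset never does.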
Since bagging allows duplicates, there are more possible subsets, so the subsets drawn for different estimators are more likely to differ. Pasting rules out duplicates, which shrinks the space of possible subsets; in the example above, any two pasted subsets of 4 samples must share at least 3 of them, so pasted subsets end up looking more similar between estimators.
The diversity introduced by bagging means each subset is less representative of the original data, so the individual estimators will have slightly higher bias. But that same diversity makes the estimators less correlated with one another, and aggregating less correlated estimators generalises better to new data (lower variance).
Conversely, since pasted subsets contain no duplicates, each estimator gets a more faithful picture of the original dataset and scores higher on the training data (lower bias). But by tracking the original dataset more closely, the estimators in a pasted ensemble become more correlated with one another. Ensembles hinge on and exploit estimator diversity, which puts pasting at a disadvantage and is why it might not generalise as well (higher variance).
Bagging generally results in better models, but you'd need to run cross-validation to see whether this holds for your particular scenario. It's probably not worth tuning this aspect until you've assessed more impactful model parameters.
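The comparison above can be run directly in scikit-learn, assuming it is installed: `BaggingClassifier` switches between bagging and pasting via its `bootstrap` flag, and `cross_val_score` gives the cross-validation check mentioned above. The dataset here is synthetic and the parameter values are illustrative, not recommendations.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score

# Synthetic classification data, just for demonstration.
X, y = make_classification(n_samples=500, random_state=42)

for bootstrap, name in [(True, "bagging"), (False, "pasting")]:
    clf = BaggingClassifier(
        n_estimators=100,     # size of the ensemble
        max_samples=0.8,      # each estimator sees 80% of the training set
        bootstrap=bootstrap,  # True = with replacement (bagging), False = pasting
        random_state=42,
    )
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean CV accuracy = {scores.mean():.3f}")
```

Whichever variant scores higher here says nothing universal; the point is that the comparison costs one flag and a cross-validation run.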