Edit: oh, now I think I see why @CarlosMougan said no. You said
> ...start the same GridsearchCV with the same parameter and just change...
If you mean fixing the optimal values for all hyperparameters except n_estimators and then searching over only that one hyperparameter, then Carlos is right, and for the right reason. Below, I interpreted your suggestion as searching over the whole space again, just with a new range for n_estimators.
I don't see any reason that you can't do this. You might want to fix the cv splits ahead of time and use the same ones for both runs of the grid search, to keep the comparisons completely fair. (In sklearn, this means passing cv as either one of their CV generators or as an iterable.)
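A minimal sketch of that idea, with toy data and placeholder grid values (not from your question): materialize the folds once as a list, then hand that exact list to every search so they all score on identical splits.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, KFold

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# Fix the folds once; `splits` is an iterable of (train_idx, test_idx) pairs.
cv = KFold(n_splits=5, shuffle=True, random_state=0)
splits = list(cv.split(X, y))

first_search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"max_depth": [3, 5, None], "n_estimators": [100, 200]},
    cv=splits,  # reuse this exact list for any later search
)
first_search.fit(X, y)
```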
This approach makes sense particularly when:

- you want to examine some results right away, so you run a smaller grid first and look at it while the next grid runs. (This sort of matches your case, where run times(?) are high.)
- you expected the first grid to be all you needed, but find that one hyperparameter always performs best at the edge of your grid, so now you want to extend its range (sketched below).
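Continuing the sketch above (same X, y, and splits), the second run repeats the whole grid but pushes the n_estimators range past the edge that won the first time; the values are again placeholders.

```python
# Second pass over the full grid, reusing the exact same folds.
second_search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"max_depth": [3, 5, None], "n_estimators": [400, 800]},
    cv=splits,
)
second_search.fit(X, y)

# Because the folds are identical, the two searches' cv_results_ can be
# compared directly (e.g. by mean_test_score).
```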
Finally, please note that the number of trees in a random forest has little to do with performance; rather, adding more trees just stabilizes some of the randomness in the tree construction. So generally, you want to set it "high enough," but not so high that computation takes needlessly long.
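One rough way to check "high enough" (my own suggestion, not something from your question): grow a single forest incrementally with warm_start and watch the out-of-bag score level off as trees are added.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

rf = RandomForestClassifier(warm_start=True, oob_score=True, random_state=0)
for n in [50, 100, 200, 400, 800]:
    rf.set_params(n_estimators=n)
    rf.fit(X, y)  # with warm_start, this adds trees instead of refitting from scratch
    print(n, round(rf.oob_score_, 4))

# Once the OOB score stops moving, extra trees only add computation time.
```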