If it could use a hashset, it would not be a list anymore. A hashset does not allow duplicates, but a list does; also, a list maintains a defined item order and supports access by index, which is O(1), the fastest possible. (You could combine the two features by, say, using a hashset of lists, where each inner list holds all duplicates of one value, but that hardly makes much practical sense :-).)
Now, it's not clear which operation's time complexity you mean. If you want to remove an element by its value, the operation first needs to search for the item by value, and that search alone is O(N): the node has to be found before anything can be removed, and the only way to find it is to test the items sequentially until a match turns up.
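To make that concrete, here is a minimal sketch (assuming Java and its `java.util.LinkedList`, since the question does not name a language; the helper name `removeFirst` is my own). Even though unlinking a node of a linked list is cheap, the walk to find it dominates:

```java
import java.util.Iterator;
import java.util.LinkedList;
import java.util.List;

public class RemoveByValue {
    // Remove the first occurrence of `value`. The list must be walked
    // node by node until a match is found, so the whole operation is
    // O(N) no matter how cheap the unlink step itself is.
    static <T> boolean removeFirst(List<T> list, T value) {
        Iterator<T> it = list.iterator();
        while (it.hasNext()) {
            if (it.next().equals(value)) {
                // On a linked list the unlink is O(1),
                // but we already paid O(N) to get here.
                it.remove();
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        List<Integer> list = new LinkedList<>(List.of(1, 2, 3, 2));
        System.out.println(removeFirst(list, 2)); // true
        System.out.println(list); // [1, 3, 2] — only the first duplicate is removed
    }
}
```

Note that only the first matching item is removed; removing all duplicates of a value is still a single O(N) pass with the same iterator.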
But if you already know the position of the item to be removed — for a linked list, that means holding a reference to its node, not just a numeric index, because reaching index i still costs O(i) traversal — this is a different story, and the cost depends on the particular implementation. According to
this specification, this should be a
doubly-linked list. In that case, the removal step itself (unlinking the node) is O(1).
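The O(1) unlink is easy to see from the pointer structure. A hypothetical bare-bones sketch (my own `Node` and `unlink` names, not any library's API):

```java
public class DoublyLinked {
    static final class Node<T> {
        T value;
        Node<T> prev, next;
        Node(T value) { this.value = value; }
    }

    // Unlinking a node you already hold is O(1): rewire the two
    // neighbours' pointers around it. No traversal is required,
    // which is exactly why a doubly-linked list can remove in O(1).
    static <T> void unlink(Node<T> node) {
        if (node.prev != null) node.prev.next = node.next;
        if (node.next != null) node.next.prev = node.prev;
        node.prev = node.next = null; // detach the removed node
    }

    public static void main(String[] args) {
        Node<String> a = new Node<>("a"), b = new Node<>("b"), c = new Node<>("c");
        a.next = b; b.prev = a; b.next = c; c.prev = b; // a <-> b <-> c
        unlink(b);
        System.out.println(a.next.value); // c
        System.out.println(c.prev.value); // a
    }
}
```

A singly-linked list cannot do this from the node alone: without the `prev` pointer you would have to walk from the head to find the predecessor, which is O(N) again.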
If the list implementation is based on an array, the removal will still depend on the size of the list, because all the elements following the one being removed have to be shifted over to fill the gap left behind.
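Here is what that shift looks like in a sketch (again assuming Java; `removeAt` is an illustrative helper, not a library method). Removing index i from an array of length N copies the N - i - 1 trailing elements, so the cost is O(N) in the worst case even though no search is involved:

```java
import java.util.Arrays;

public class ArrayRemove {
    // Return a copy of `a` with the element at index `i` removed.
    // The prefix [0, i) is kept in place; the suffix (i, N) is
    // shifted left by one — that copy is what makes this O(N).
    static int[] removeAt(int[] a, int i) {
        int[] b = new int[a.length - 1];
        System.arraycopy(a, 0, b, 0, i);
        System.arraycopy(a, i + 1, b, i, a.length - i - 1);
        return b;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(removeAt(new int[]{1, 2, 3, 4}, 1)));
        // [1, 3, 4]
    }
}
```

A real array-backed list (such as `java.util.ArrayList`) shifts within its internal buffer instead of allocating a new array, but the amount of copying is the same.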
—SA