Iterable<T>
└── Collection<T>
    ├── List<T>
    │   ├── ArrayList
    │   ├── LinkedList
    │   ├── Vector (legacy)
    │   │   └── Stack (legacy)
    │   └── CopyOnWriteArrayList
    ├── Set<T>
    │   ├── HashSet
    │   │   └── LinkedHashSet
    │   ├── TreeSet (SortedSet → NavigableSet)
    │   ├── ConcurrentSkipListSet
    │   └── CopyOnWriteArraySet
    └── Queue<T>
        ├── LinkedList
        ├── PriorityQueue
        ├── ArrayDeque
        └── BlockingQueue (interface)
            ├── LinkedBlockingQueue
            ├── ArrayBlockingQueue
            └── PriorityBlockingQueue

Map<K,V> (does NOT extend Collection)
├── HashMap
│   └── LinkedHashMap
├── TreeMap (SortedMap → NavigableMap)
├── Hashtable (legacy)
│   └── Properties
├── WeakHashMap
├── IdentityHashMap
├── EnumMap
└── ConcurrentHashMap
| | List | Set | Map |
|---|---|---|---|
| Duplicates | Allowed | Not allowed | Keys unique, values allowed |
| Order | Insertion order maintained | Depends on impl | Depends on impl |
| Null | Allowed (multiple) | One null (HashSet) | One null key (HashMap) |
| Interface | List<E> | Set<E> | Map<K,V> |
| Access | Index (O(1) for ArrayList) | No index | By key |
| Extends | Collection | Collection | Nothing (standalone) |
| | ArrayList | LinkedList |
|---|---|---|
| Internal | Dynamic array | Doubly-linked list |
| Random access get(i) | O(1) | O(n) |
| Insert/delete at middle | O(n) – shift elements | O(1) – just relink nodes |
| Insert/delete at end | O(1) amortized | O(1) |
| Insert/delete at beginning | O(n) | O(1) |
| Memory | Compact (array) | More overhead (node objects) |
| Implements | List | List, Deque, Queue |
| Cache performance | Better (contiguous memory) | Poor (scattered nodes) |
Default choice: ArrayList. Use LinkedList only when frequent O(1) insertions/deletions at head/middle with iteration. In practice, ArrayList is almost always faster due to CPU cache effects even for some "LinkedList scenarios".
LinkedList excels as a Deque (double-ended queue):
LinkedList<String> deque = new LinkedList<>();
deque.addFirst("first"); // O(1)
deque.addLast("last"); // O(1)
deque.removeFirst(); // O(1)
deque.removeLast(); // O(1)
Use cases:
- Queue/Deque implementation: Where you frequently add/remove from both ends
- LRU Cache: Remove from tail, add to head (but LinkedHashMap is usually better)
- Stack with frequent push/pop at head
Prefer ArrayDeque over LinkedList for most queue/deque scenarios – ArrayDeque is faster (array-based, better cache behavior) and uses less memory.
Both are resizable arrays, but:
| | Vector | ArrayList |
|---|---|---|
| Thread safety | Synchronized (all methods) | Not synchronized |
| Performance | Slower (sync overhead) | Faster |
| Growth | Doubles by default | Grows by 50% |
| Legacy | Java 1.0 | Java 1.2 (Collections) |
| Recommendation | Legacy/avoid | Preferred |
Vector is legacy – use ArrayList + explicit synchronization (Collections.synchronizedList()) or CopyOnWriteArrayList for concurrent use.
Vector synchronizes every method at the object level, even when no concurrent access is happening. This is:
- Too coarse-grained: Even iteration requires locking every element access
- Not composable: if (!v.isEmpty()) v.get(0) is still NOT thread-safe (race between isEmpty and get)
- Performance penalty: Even single-threaded code pays the sync cost
Modern alternatives:
- ArrayList for non-concurrent use
- CopyOnWriteArrayList for read-heavy concurrent use
- Collections.synchronizedList(new ArrayList<>()) for simple synchronization needs
- ConcurrentLinkedDeque for a lock-free concurrent deque
HashMap uses an array of buckets (Node[]) with linked list (Java 7) or red-black tree (Java 8+ when chain length ≥ 8).
Array (buckets):
[0] -> null
[1] -> Node("Alice", 30) -> Node("Bob", 25) // collision - same bucket
[2] -> Node("Charlie", 35)
[3] -> null
...
[15] -> Node("Dave", 28)
put(key, value) flow:
- key.hashCode() computed
- Hash spread: h ^ (h >>> 16) (mixes high bits into the low bits, reducing clustering)
- Bucket index: hash & (capacity - 1) (bitwise AND, fast when capacity is a power of 2)
- If bucket empty: insert new Node
- If bucket has entries: iterate, comparing with equals()
  - Key found: update value
  - Key not found: append (treeify if chain reaches 8)
get(key) flow:
- Compute hash → bucket index
- Traverse nodes in the bucket, comparing with equals()
- Return value or null
Treeification (Java 8): When a single bucket's linked list reaches 8 nodes AND capacity ≥ 64, converts to red-black tree → O(log n) vs O(n) for long chains.
Default capacity: 16 (always power of 2)
Default load factor: 0.75
Threshold = capacity × load factor. When entries exceed threshold, HashMap rehashes (creates new array of double size, re-inserts all entries).
Example: Default map with 16 capacity, threshold = 12. When 13th entry is added, resize to 32.
Why power of 2? hash & (n-1) is faster than hash % n and distributes evenly.
Why 0.75? Balance between time (fewer collisions, bigger array) and space (fewer rehashes, smaller array). Below 0.6 = too many empty buckets. Above 0.85 = too many collisions.
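The index math above can be sketched as a standalone helper. The names spread and indexFor are illustrative, not JDK API; HashMap performs equivalent steps internally in its hash() method and during put:

```java
// Sketch of HashMap's bucket-index math (helper names are illustrative, not JDK API)
public class BucketIndexDemo {
    // Mix high bits into low bits, as HashMap's hash spreading does
    static int spread(int h) {
        return h ^ (h >>> 16);
    }

    // capacity must be a power of 2 for the mask trick to replace the modulo
    static int indexFor(Object key, int capacity) {
        return spread(key.hashCode()) & (capacity - 1);
    }

    public static void main(String[] args) {
        // Any key lands in a bucket in [0, capacity)
        System.out.println(indexFor("Alice", 16));
    }
}
```

Because capacity - 1 is all ones in binary (e.g. 15 = 0b1111), the AND keeps only the low bits of the spread hash, which is exactly hash % capacity for power-of-2 capacities, but cheaper.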
Pre-sizing: If you know approximate size, provide it:
Map<String, String> map = new HashMap<>((int) (expectedSize / 0.75) + 1);
// Or use Guava: Maps.newHashMapWithExpectedSize(expectedSize)
Collision: two keys hash to the same bucket.
Java 7: Separate chaining with a linked list. New entries are added at the head of the chain – during a concurrent resize, this head insertion can create a cycle in a bucket's list, causing infinite loops. Don't use HashMap from multiple threads!
Java 8:
- Linked list for short chains
- Red-black tree once a chain reaches 8 nodes (and capacity ≥ 64)
- Tree degrades back to a list when it shrinks to 6 nodes or fewer
Why switch to tree? In worst case (all keys hash to same bucket), linked list = O(n). Tree = O(log n). Mitigates hash DoS attacks where attacker crafts many keys with same hash.
String's hashCode() is deterministic and was historically exploitable – Java 7u6 introduced alternative hashing with a per-JVM random seed as a mitigation; Java 8 removed it in favor of treeification.
| | HashMap | Hashtable |
|---|---|---|
| Thread safety | Not synchronized | All methods synchronized |
| Null keys | One null key allowed | No null keys |
| Null values | Allowed | Not allowed |
| Performance | Faster | Slower |
| Legacy | No | Yes (Java 1.0) |
| Iterator | Fail-fast | Enumeration (legacy, not fail-fast) |
| Extends | AbstractMap | Dictionary (ancient) |
Hashtable is legacy – use ConcurrentHashMap for thread-safe map (much better than Hashtable).
| | HashMap | ConcurrentHashMap |
|---|---|---|
| Thread safety | No | Yes |
| Null keys | Yes | No |
| Locking | None | Segment-level (Java 7) / CAS + bucket-level (Java 8) |
| Performance (single-thread) | Faster | Slightly slower |
| Performance (multi-thread) | Dangerous | Designed for it |
| Concurrent reads | Unsafe (may see inconsistent state) | Safe, non-blocking |
| Iterator | Fail-fast | Weakly consistent |
Java 8 ConcurrentHashMap internals: No segment locks. Uses CAS for most operations. Only locks individual buckets during tree restructuring. size() uses LongAdder-like approach.
Key methods:
- putIfAbsent(k, v): Atomic put-if-missing
- computeIfAbsent(k, fn): Compute and store if missing
- merge(k, v, fn): Atomic merge
- compute(k, fn): Atomic compute
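A brief sketch of atomic counting with merge(), one of the most common uses of these methods:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class AtomicCountDemo {
    // Count word occurrences; merge() performs an atomic read-modify-write per key
    static Map<String, Integer> count(String[] words) {
        Map<String, Integer> counts = new ConcurrentHashMap<>();
        for (String w : words) {
            counts.merge(w, 1, Integer::sum); // insert 1, or atomically add 1 to the existing count
        }
        return counts;
    }

    public static void main(String[] args) {
        System.out.println(count(new String[]{"a", "b", "a"}).get("a")); // 2
    }
}
```

The equivalent get-then-put on a plain HashMap would be a check-then-act race under concurrency; merge() makes the whole update atomic.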
| | HashMap | TreeMap |
|---|---|---|
| Internal structure | Hash table | Red-black tree |
| Ordering | No guaranteed order | Sorted by key |
| Performance | O(1) average | O(log n) |
| Null keys | One allowed | No null keys (comparison) |
| Navigation | No | floorKey, ceilingKey, subMap, headMap, tailMap |
| Implements | Map | SortedMap, NavigableMap |
Use TreeMap when:
- Need keys in sorted order
- Need range queries: "all entries with key between 100 and 200"
- Need floor/ceiling key operations
TreeMap<Integer, String> scores = new TreeMap<>();
scores.subMap(70, 100); // entries with keys 70-99
scores.floorKey(85);    // highest key ≤ 85
scores.ceilingKey(70);  // lowest key ≥ 70
scores.headMap(50);     // keys < 50
LinkedHashMap maintains insertion order (or access order).
// Insertion order (default)
Map<String, Integer> map = new LinkedHashMap<>();
map.put("banana", 2); map.put("apple", 1); map.put("cherry", 3);
// Iterates: banana, apple, cherry
// Access order (LRU behavior)
Map<String, Integer> lru = new LinkedHashMap<>(16, 0.75f, true) {
@Override protected boolean removeEldestEntry(Map.Entry<String, Integer> eldest) {
return size() > MAX_CACHE_SIZE; // evict least-recently-accessed entry; MAX_CACHE_SIZE is a constant you define
}
};
Use cases:
- Preserve insertion order in map iteration
- LRU Cache (access-order mode + removeEldestEntry)
- Ordered JSON/config properties
- Event logging where insertion order matters
Conceptual mismatch:
- Collection<E> stores elements
- Map<K,V> stores key-value pairs
If Map extended Collection, what would E be? Map.Entry<K,V>? But then:
- add(Map.Entry<K,V>) doesn't make sense vs put(K, V)
- contains(Object) – contains key? value? entry?
- remove(Object) – remove by key? value?
- size() – conflicting contracts
The operations and semantics are fundamentally different. Map provides entrySet(), keySet(), values() which return Collection views.
Josh Bloch (Java Collections designer): "Maps are not proper Collections because the element concept doesn't fit the Map abstraction."
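The collection views are live, not copies – changes through a view affect the backing map. A small sketch:

```java
import java.util.HashMap;
import java.util.Map;

public class MapViewsDemo {
    static int sizeAfterViewRemoval() {
        Map<String, Integer> m = new HashMap<>(Map.of("a", 1, "b", 2));
        m.keySet().remove("a"); // removing via the view removes the entry from the map
        return m.size();
    }

    public static void main(String[] args) {
        System.out.println(sizeAfterViewRemoval()); // 1
    }
}
```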
HashSet is backed by a HashMap (values are a dummy PRESENT object):
public class HashSet<E> {
private transient HashMap<E, Object> map;
private static final Object PRESENT = new Object();
public boolean add(E e) { return map.put(e, PRESENT) == null; }
public boolean contains(Object o) { return map.containsKey(o); }
}
Uniqueness enforced via hashCode() and equals():
- Compute hashCode() → find bucket
- Compare with equals() against each element in the bucket
- If match found: not added (duplicate)
- If no match: added
This is why overriding equals() without hashCode() in Set elements causes duplicates:
class Point {
    int x, y;
    Point(int x, int y) { this.x = x; this.y = y; }
    // equals() overridden to compare x and y — but hashCode() NOT overridden!
}
Set<Point> set = new HashSet<>();
set.add(new Point(1, 1));
set.add(new Point(1, 1)); // Default identity hashCode → different bucket → both added!
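The fix is to override both methods consistently; a Java 16+ record generates a matching equals()/hashCode() pair automatically (sketch):

```java
import java.util.HashSet;
import java.util.Set;

public class PointSetDemo {
    // record generates equals() and hashCode() from x and y together
    record Point(int x, int y) {}

    static int addTwice() {
        Set<Point> set = new HashSet<>();
        set.add(new Point(1, 1));
        set.add(new Point(1, 1)); // same hashCode, equals() true -> duplicate rejected
        return set.size();
    }

    public static void main(String[] args) {
        System.out.println(addTwice()); // 1
    }
}
```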
| | HashSet | TreeSet |
|---|---|---|
| Order | No order | Sorted (natural/comparator) |
| Performance | O(1) add/remove/contains | O(log n) |
| Null | Allows one null | No null (can't compare) |
| Backed by | HashMap | TreeMap |
| Interface | Set | SortedSet, NavigableSet |
| Use case | Fast lookup, no order needed | Sorted unique elements, range queries |
// TreeSet range operations
TreeSet<Integer> set = new TreeSet<>(Set.of(1, 5, 3, 7, 9));
set.headSet(5);   // [1, 3] - elements < 5
set.tailSet(5);   // [5, 7, 9] - elements >= 5
set.subSet(3, 7); // [3, 5] - elements >= 3 and < 7
set.floor(6);     // 5 - greatest element <= 6
set.ceiling(6);   // 7 - smallest element >= 6
TreeSet maintains a red-black tree (self-balancing BST). Every add/remove/contains requires:
- Traversal from root to correct position: O(log n)
- Potentially rebalancing the tree: O(log n) rotations
HashSet's hash-based approach achieves O(1) average case (single bucket lookup if no collisions).
For n=1,000,000: HashSet find = 1 hash + ~1 comparison; TreeSet find = ~20 comparisons (log₂(1,000,000) ≈ 20).
Comparable<T> is an interface for defining natural ordering of a class.
public interface Comparable<T> {
int compareTo(T other);
}
- Returns negative: this < other
- Returns 0: this == other
- Returns positive: this > other
public class Employee implements Comparable<Employee> {
private String name;
private double salary;
@Override
public int compareTo(Employee other) {
return Double.compare(this.salary, other.salary); // sort by salary
}
}
Used by TreeSet, TreeMap, Collections.sort(), Arrays.sort() when no explicit Comparator provided.
Contract: Must be consistent with equals. If compareTo() returns 0, equals() should return true.
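BigDecimal is the textbook violator of this contract: its compareTo() ignores scale while equals() does not, so sorted and hashed sets disagree about duplicates (sketch):

```java
import java.math.BigDecimal;
import java.util.HashSet;
import java.util.Set;
import java.util.TreeSet;

public class ConsistencyDemo {
    static int treeSize() {
        Set<BigDecimal> tree = new TreeSet<>();
        tree.add(new BigDecimal("1.0"));
        tree.add(new BigDecimal("1.00")); // compareTo() == 0 -> treated as a duplicate
        return tree.size();
    }

    static int hashSize() {
        Set<BigDecimal> hash = new HashSet<>();
        hash.add(new BigDecimal("1.0"));
        hash.add(new BigDecimal("1.00")); // equals() is false (different scale) -> both kept
        return hash.size();
    }

    public static void main(String[] args) {
        System.out.println(treeSize() + " vs " + hashSize()); // 1 vs 2
    }
}
```

TreeSet uses compareTo() for uniqueness while HashSet uses equals()/hashCode(), so an inconsistent class behaves differently in each.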
| | Comparable | Comparator |
|---|---|---|
| Package | java.lang | java.util |
| Method | compareTo(T) | compare(T, T) |
| Defined by | The class itself | External class or lambda |
| Orders | Natural/default | Multiple/custom |
| Modifies | Source class | No modification |
| Use | Sort by natural order | Sort by custom order |
// Comparable - in the class
class Person implements Comparable<Person> {
public int compareTo(Person other) { return Integer.compare(this.age, other.age); } // Integer.compare avoids int overflow of subtraction
}
// Comparator - external, multiple options
Comparator<Person> byName = Comparator.comparing(Person::getName);
Comparator<Person> byAgeDesc = Comparator.comparingInt(Person::getAge).reversed();
Comparator<Person> complex = Comparator.comparing(Person::getLastName)
.thenComparing(Person::getFirstName)
.thenComparingInt(Person::getAge);
List<Employee> employees = ...;
// By salary descending
employees.sort(Comparator.comparingDouble(Employee::getSalary).reversed());
// Multiple criteria
employees.sort(Comparator.comparing(Employee::getDepartment)
.thenComparing(Employee::getName)
.thenComparingDouble(Employee::getSalary));
// Null-safe
employees.sort(Comparator.comparing(Employee::getManager,
Comparator.nullsLast(Comparator.naturalOrder())));
// Stream
employees.stream()
.sorted(Comparator.comparing(Employee::getHireDate))
.collect(Collectors.toList());
Fail-fast iterators (ArrayList, HashMap, HashSet): Throw ConcurrentModificationException if collection is structurally modified during iteration (not through the iterator).
Mechanism: modCount counter tracks structural modifications. Iterator checks modCount on each next() call.
List<String> list = new ArrayList<>(List.of("a", "b", "c"));
for (String s : list) {
if (s.equals("b")) list.remove(s); // ConcurrentModificationException!
}
// Fix: list.removeIf(s -> s.equals("b"));
// Or: use iterator.remove()
Fail-safe iterators (CopyOnWriteArrayList, ConcurrentHashMap): Iterate over a snapshot or weakly-consistent view; no exception thrown, but may not reflect latest changes.
CopyOnWriteArrayList<String> cowList = new CopyOnWriteArrayList<>(List.of("a", "b"));
for (String s : cowList) {
cowList.add("new"); // no exception, iterator sees original snapshot
}
| Iterator | ListIterator | |
|---|---|---|
| Direction | Forward only | Both forward and backward |
| Interfaces | Iterator<E> | ListIterator<E> extends Iterator |
| Available for | All Collections | Lists only |
| Methods | hasNext(), next(), remove() | + hasPrevious(), previous(), add(), set(), nextIndex(), previousIndex() |
List<String> list = new ArrayList<>(List.of("a", "b", "c"));
ListIterator<String> it = list.listIterator();
while (it.hasNext()) {
String s = it.next();
it.set(s.toUpperCase()); // replace current element
}
while (it.hasPrevious()) {
System.out.println(it.previous()); // C, B, A
}
Calling list.remove() during for-each throws ConcurrentModificationException. Iterator's remove() is the safe way to remove during iteration.
Iterator<String> it = list.iterator();
while (it.hasNext()) {
String s = it.next();
if (condition(s)) it.remove(); // safe - updates modCount
}
it.remove() removes the last element returned by next(). Must call next() first. Can only call once per next().
Modern alternative (Java 8+):
list.removeIf(s -> condition(s)); // cleaner
| Iterator | Stream | |
|---|---|---|
| Paradigm | Imperative | Functional/declarative |
| Operation type | Sequential | Sequential or parallel |
| Reusability | New iterator per traversal | Single-use (create a new stream each time) |
| Lazy | No | Yes (intermediate ops) |
| Modification | Supports remove() | Does not modify the source |
| Short-circuit | Manual | Built-in (findFirst, limit) |
| Parallel | No | Yes (parallelStream()) |
| Chaining | No | Yes (fluent API) |
Use Iterator when: simple sequential traversal with possible removal. Use Stream when: transformations, filtering, aggregations, parallel processing.
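A side-by-side sketch of the two styles on the same list:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.stream.Collectors;

public class IterVsStreamDemo {
    static List<Integer> evensRemovedThenSquared() {
        List<Integer> nums = new ArrayList<>(List.of(1, 2, 3, 4));
        // Iterator: imperative traversal with safe in-place removal
        Iterator<Integer> it = nums.iterator();
        while (it.hasNext()) {
            if (it.next() % 2 == 0) it.remove();
        }
        // Stream: declarative transformation that leaves the source untouched
        return nums.stream().map(n -> n * n).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(evensRemovedThenSquared()); // [1, 9]
    }
}
```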
Spliterator (Splittable Iterator) supports parallel traversal of a data source.
public interface Spliterator<T> {
boolean tryAdvance(Consumer<? super T> action); // process one element
Spliterator<T> trySplit(); // split for parallel processing
long estimateSize(); // rough count
int characteristics(); // ORDERED, SORTED, SIZED, DISTINCT, IMMUTABLE, etc.
}
Stream API uses Spliterators internally for parallel streams:
- trySplit() recursively splits the Spliterator
- Each split is processed by a different thread in the ForkJoinPool
- Results are combined
Custom Spliterators: Implement for custom data structures to support parallel stream processing.
Queue: FIFO – insert at tail, remove from head.
Queue<String> queue = new LinkedList<>();
queue.offer("a"); // add to tail
queue.poll(); // remove from head
queue.peek(); // look at head without removing
Deque (Double-Ended Queue): Insert/remove at both ends.
Deque<String> deque = new ArrayDeque<>();
deque.offerFirst("a"); // add to front
deque.offerLast("b"); // add to back
deque.pollFirst(); // remove from front
deque.pollLast(); // remove from back
ArrayDeque is preferred over LinkedList for queue/deque use cases (faster, less memory).
ArrayDeque as stack: push() = addFirst(), pop() = removeFirst() – faster than Stack.
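A minimal sketch of the stack usage:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class StackDemo {
    static int pushTwicePopOnce() {
        Deque<Integer> stack = new ArrayDeque<>();
        stack.push(1);      // equivalent to addFirst(1)
        stack.push(2);
        return stack.pop(); // equivalent to removeFirst() -> LIFO order
    }

    public static void main(String[] args) {
        System.out.println(pushTwicePopOnce()); // 2
    }
}
```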
PriorityQueue uses a min-heap (binary heap stored in array). Smallest element (by natural order or Comparator) is always at the head.
PriorityQueue<Integer> pq = new PriorityQueue<>();
pq.offer(5); pq.offer(1); pq.offer(3);
pq.poll(); // returns 1 (minimum)
Internal structure: Array where element at index i has children at 2i+1 and 2i+2, parent at (i-1)/2.
Operations:
- offer(e): Add at end, sift up – O(log n)
- poll(): Remove root (min), replace with last element, sift down – O(log n)
- peek(): Return root – O(1)
- remove(o): Find and remove arbitrary element – O(n) to find
Not sorted (heap order, not sorted order). Iteration is NOT in priority order; only poll() guarantees order.
Max-heap:
PriorityQueue<Integer> maxHeap = new PriorityQueue<>(Comparator.reverseOrder());
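Because only poll() respects priority order, draining the heap is how you consume elements smallest-first (sketch):

```java
import java.util.List;
import java.util.PriorityQueue;

public class HeapDrainDemo {
    static String drain() {
        PriorityQueue<Integer> pq = new PriorityQueue<>(List.of(5, 1, 3));
        StringBuilder out = new StringBuilder();
        while (!pq.isEmpty()) {
            out.append(pq.poll()); // poll() always yields the current minimum
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(drain()); // 135
    }
}
```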
BlockingQueue extends Queue with blocking operations for producer-consumer patterns:
BlockingQueue<Task> queue = new LinkedBlockingQueue<>(100); // capacity 100
// Producer
queue.put(task);                        // blocks if full
queue.offer(task, 5, TimeUnit.SECONDS); // waits up to 5s
// Consumer
Task next = queue.take();               // blocks if empty
Task timed = queue.poll(5, TimeUnit.SECONDS); // waits up to 5s
Implementations:
- LinkedBlockingQueue: Unbounded (or capacity-limited), linked nodes
- ArrayBlockingQueue: Fixed capacity, array-backed, fair/unfair option
- PriorityBlockingQueue: Priority-ordered, unbounded
- DelayQueue: Elements become available only after a delay
- SynchronousQueue: No storage; direct handoff between producer and consumer
- LinkedTransferQueue: More efficient for some producer-consumer scenarios
Used in: Thread pool's work queue (Executors.newFixedThreadPool uses LinkedBlockingQueue), message passing between threads.
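A minimal producer-consumer sketch using ArrayBlockingQueue; the tiny capacity forces the producer to block until the consumer catches up:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ProducerConsumerDemo {
    static int produceAndSum() {
        BlockingQueue<Integer> q = new ArrayBlockingQueue<>(2); // tiny capacity forces blocking
        Thread producer = new Thread(() -> {
            try {
                for (int i = 1; i <= 5; i++) q.put(i); // put() blocks while the queue is full
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();
        int sum = 0;
        try {
            for (int i = 0; i < 5; i++) sum += q.take(); // take() blocks while the queue is empty
            producer.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(produceAndSum()); // 15
    }
}
```

No explicit locks or condition variables are needed; the queue handles all coordination.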
| Regular | Concurrent |
|---|---|
| ArrayList | CopyOnWriteArrayList |
| HashSet | ConcurrentSkipListSet |
| HashMap | ConcurrentHashMap |
| TreeMap | ConcurrentSkipListMap |
| LinkedList | ConcurrentLinkedDeque |
| PriorityQueue | PriorityBlockingQueue |
ConcurrentSkipList*: Lock-free, sorted, O(log n) operations. Alternative to synchronized TreeMap/TreeSet.
ConcurrentLinkedDeque: Lock-free, non-blocking concurrent deque. High throughput.
CopyOnWriteArrayList: On every write (add, set, remove), creates a new copy of the entire array. Reads are non-blocking (use snapshot of array at time of iterator creation).
CopyOnWriteArrayList<String> list = new CopyOnWriteArrayList<>();
// Read: no locking, uses the current array reference
// Write: creates a new array, copies, modifies, switches the reference atomically
Best for: Read-heavy, write-rare scenarios (observer lists, listener registries, routing tables).
Terrible for: High write frequency (copies entire array on each write – expensive for large lists).
Trade-offs:
- Reads: Very fast, no locking
- Writes: Slow (O(n) copy), expensive
- Memory: May hold multiple array versions during GC
- Iteration: Safe (snapshot), won't see concurrent modifications
WeakHashMap: Keys are held with weak references. If no strong reference to a key exists elsewhere, GC can collect the key and the entry is automatically removed.
WeakHashMap<Object, String> cache = new WeakHashMap<>();
Object key = new Object();
cache.put(key, "value");
key = null;   // remove the strong reference
System.gc();  // after GC, the entry may be removed from the map
cache.size(); // may be 0
Use cases:
- Caches where values should be collected when key is no longer used
- Metadata/attributes attached to objects (without preventing GC)
- Canonical mapping: canonicalize instances (no duplicate instances of same logical object)
Pitfall: If key is a string literal (interned, always strongly reachable), it's never collected. WeakHashMap with String keys often doesn't work as expected.
IdentityHashMap: Uses == (reference equality) instead of .equals() for key comparison, and System.identityHashCode() instead of hashCode().
IdentityHashMap<String, Integer> map = new IdentityHashMap<>();
String a = new String("key");
String b = new String("key");
map.put(a, 1);
map.put(b, 2);
map.size(); // 2 - different objects, same content but different identity
Use cases:
- Object graph traversal: Detect cycles using object identity (not equality)
- Serialization: Track already-serialized objects
- Deep clone: Track already-copied objects
- Proxy systems: Map by object identity
EnumMap<K extends Enum<K>, V>: HashMap optimized for enum keys using a compact array internally.
enum Day { MON, TUE, WED, THU, FRI, SAT, SUN }
EnumMap<Day, String> schedule = new EnumMap<>(Day.class);
schedule.put(Day.MON, "Team meeting");
schedule.put(Day.FRI, "Code review");
Advantages over HashMap with enum keys:
- Faster: Array indexed by enum ordinal – O(1) with no hash computation or collision
- Memory-efficient: Compact array representation
- Ordered: Iterates in enum declaration order
- No null keys (enum values can't be null)
Always prefer EnumMap over HashMap when keys are enums.
Key principles:
- Pre-size collections: new ArrayList<>(expectedSize) or new HashMap<>((int) (expectedSize / 0.75) + 1)
- ArrayList vs LinkedList: ArrayList almost always wins in practice (cache efficiency)
- HashSet for contains(): O(1) vs ArrayList's O(n)
- Avoid iterator wrapping: direct array iteration > stream > iterator for simple hot loops
- Primitive collections: use IntStream, or libraries like Eclipse Collections or Trove, for primitive lists (avoids boxing)
- Immutable collections: List.of() (Java 9+) – more memory-efficient than an ArrayList that is never modified
- EnumMap/EnumSet: for enum-keyed maps and sets – very fast
| Operation | ArrayList | LinkedList | HashMap | TreeMap | HashSet | TreeSet |
|---|---|---|---|---|---|---|
| Add | O(1) amort | O(1) | O(1) avg | O(log n) | O(1) avg | O(log n) |
| Remove | O(n) | O(1)* | O(1) avg | O(log n) | O(1) avg | O(log n) |
| Get/Contains | O(1) idx / O(n) value | O(n) | O(1) avg | O(log n) | O(1) avg | O(log n) |
| Iteration | O(n) | O(n) | O(capacity) | O(n) | O(capacity) | O(n) |
*LinkedList remove is O(n) if you need to find the element first; O(1) with iterator.
HashMap worst case (all collisions): O(n). After Java 8 treeification: O(log n) worst case.
- Use concurrent collections (best):
  Map<K,V> map = new ConcurrentHashMap<>();
  List<E> list = new CopyOnWriteArrayList<>();
- Collections.synchronizedXxx() (synchronized wrapper):
  List<String> syncList = Collections.synchronizedList(new ArrayList<>());
  Map<K,V> syncMap = Collections.synchronizedMap(new HashMap<>());
  // Must synchronize on the collection for iteration:
  synchronized (syncList) { for (String s : syncList) {} }
- Manual synchronization: synchronized block or ReentrantLock
- Immutability: List.of(), Collections.unmodifiableList() – no synchronization needed for reads
Choose based on use pattern:
- Read-heavy: CopyOnWriteArrayList / Collections.unmodifiableList()
- Write-heavy, all operations: Collections.synchronizedList() + external lock for iteration
- Concurrent read + write: ConcurrentHashMap, ConcurrentSkipListMap
- Queue for producer-consumer: BlockingQueue implementations
- Immutable: List.of(), Map.of() (Java 9+) – zero synchronization needed
java.util.Collections: Static methods for Collection operations:
- sort(), reverse(), shuffle(), rotate()
- min(), max(), frequency(), disjoint()
- unmodifiableList(), synchronizedList()
- singletonList(), emptyList(), nCopies()
- binarySearch() (on a sorted list)
- fill(), copy(), swap()
java.util.Arrays: Static methods for array operations:
- sort(), parallelSort()
- binarySearch()
- copyOf(), copyOfRange()
- fill()
- equals(), deepEquals()
- toString(), deepToString()
- asList() – convert array to a fixed-size List (backed by the array, no add/remove)
- stream() – create a stream from an array
Any hash-based collection (HashMap, HashSet, Hashtable) uses both:
- hashCode() to find the bucket
- equals() to confirm the key match
Breaking the contract causes:
- Missed lookups: equal objects land in different buckets (not found)
- Duplicate insertions: Two "equal" objects in same Set
- Memory leaks: Keys added but never removable
class BadKey {
    int value;
    BadKey(int value) { this.value = value; }
    @Override public boolean equals(Object o) { ... } // overrides equals
    // NO hashCode override!
}
Set<BadKey> set = new HashSet<>();
BadKey k1 = new BadKey(5);
set.add(k1);
set.contains(new BadKey(5)); // FALSE! Different hashCode → different bucket
- Mutable key whose hashCode changes after insertion:
List<String> key = new ArrayList<>(List.of("a"));
map.put(key, "value");
key.add("b"); // hashCode changes! Entry now in wrong bucket!
// map.get(key) returns null even though key is in map
// Old entry is leaked - impossible to reach without iterating all buckets
- Keys not removed: a static HashMap accumulating entries indefinitely
- ThreadLocal maps: if thread-pool threads outlive the application context, ThreadLocal + HashMap can leak class loaders in web-app redeployment scenarios
- Inner class in a static context: an anonymous inner class holding a reference to its outer instance
ArrayList: Index-based, cursor variable incremented:
// Simplified Iterator for ArrayList
int cursor = 0;
public boolean hasNext() { return cursor < size; }
public E next() {
checkForModification(); // modCount check
return elementData[cursor++];
}
HashMap: Iterates through buckets array, then through linked list/tree nodes within each bucket:
// Visits all capacity buckets even if empty (why iteration is O(capacity))
for (int i = 0; i < table.length; i++) {
Node<K,V> node = table[i];
while (node != null) { visit(node); node = node.next; }
}
LinkedList: Follows next pointers from head.
A structural modification is any operation that changes the size of a collection: add, remove, clear.
Modifying an element's value (via set()) is NOT a structural modification.
Why it matters: Fail-fast iterators track structural modifications via modCount. Any structural modification during iteration (except via the iterator itself) causes ConcurrentModificationException.
list.set(0, "new value"); // NOT structural, safe during iteration (via index, not iterator)
list.add("item"); // structural, throws if done during fail-fast iteration
Immutable: Cannot be modified (contents AND structure). Throw UnsupportedOperationException.
Java 9+ factory methods:
List<String> list = List.of("a", "b", "c"); // immutable
Set<String> set = Set.of("x", "y"); // immutable
Map<String, Integer> map = Map.of("a", 1, "b", 2); // immutable
Properties:
- No null elements/keys
- set(), add(), remove() throw UnsupportedOperationException
- Compact memory representation (no excess backing-array capacity)
- Iteration order: List – insertion; Set/Map – unspecified
Guava: ImmutableList.of(), ImmutableMap.of(), ImmutableSet.of() – similar but with richer API.
Unmodifiable (Collections.unmodifiableList()): A view wrapping the original. Throws exception on mutation, but the underlying collection can still be mutated by direct reference.
List<String> original = new ArrayList<>(List.of("a", "b"));
List<String> unmod = Collections.unmodifiableList(original);
unmod.add("c"); // UnsupportedOperationException
original.add("c"); // OK! unmod now shows "a", "b", "c"
Immutable (List.of()): No underlying mutable collection. Truly frozen.
List<String> immutable = List.of("a", "b");
// No way to modify this - period
Use List.of() when you have control over creation. Use Collections.unmodifiableList() to expose a view of a mutable collection.
Deep vs shallow immutability: Both are shallowly immutable – can't add/remove elements, but the elements themselves can be mutated if they're mutable objects.
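A sketch of shallow immutability: the list structure is frozen, but a mutable element can still change:

```java
import java.util.List;

public class ShallowDemo {
    static String mutateElement() {
        List<StringBuilder> list = List.of(new StringBuilder("a"));
        list.get(0).append("b"); // the list is frozen, the element is not
        return list.get(0).toString();
    }

    public static void main(String[] args) {
        System.out.println(mutateElement()); // ab
    }
}
```

For deep immutability, the elements themselves must be immutable types (String, records with immutable fields, etc.).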
// List
List<String> list = List.of("a", "b", "c");
List<String> copy = List.copyOf(existingCollection);
// Set
Set<String> set = Set.of("x", "y", "z");
Set<String> copy = Set.copyOf(existingCollection);
// Map
Map<String, Integer> map = Map.of("a", 1, "b", 2); // up to 10 entries
Map<String, Integer> bigMap = Map.ofEntries(
Map.entry("a", 1),
Map.entry("b", 2),
Map.entry("c", 3)
// no limit
);
Map<String, Integer> copy = Map.copyOf(existingMap);
Key characteristics vs old approaches:
- Throw NullPointerException for null elements/keys/values
- Throw IllegalArgumentException for duplicate elements/keys (in Set/Map)
- Unspecified iteration order (Set, Map)
- More memory-efficient than Arrays.asList() + Collections wrappers
- Serializable