In this paper, we propose an adaptive watermarking method that improves both the transparency and robustness of the quantization index modulation (QIM) scheme. Instead of a fixed quantization step-size, we apply a step-size adapted to the image content of each 8×8 block, balancing robust extraction against transparent embedding. The modified step-size is determined by the contrast masking thresholds of Watson's perceptual model. Based on the normalized cross-correlation between the original and detected watermarks, we observe that our method is more robust than the original QIM against additive white Gaussian noise (AWGN), salt-and-pepper noise, and Joint Photographic Experts Group (JPEG) compression attacks. By taking into account the contrast insensitivity and visibility thresholds of the human visual system, the proposed improvement achieves maximum embedding strength with a quantization step-size consistent with the local characteristics of the host signal.
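To make the underlying QIM mechanism concrete, the following is a minimal sketch of standard QIM embedding and minimum-distance extraction for a single coefficient. It is an illustration of the baseline scheme only, not the authors' implementation; in the adaptive method described above, `delta` would be chosen per 8×8 block from Watson's contrast masking thresholds rather than fixed.

```python
import numpy as np

def qim_embed(coeff, bit, delta):
    """Embed one bit into a coefficient via QIM: quantize onto the
    lattice k*delta (bit 0) or k*delta + delta/2 (bit 1)."""
    offset = delta / 2.0 if bit else 0.0
    return np.round((coeff - offset) / delta) * delta + offset

def qim_extract(coeff, delta):
    """Detect the embedded bit by decoding to the nearest lattice point."""
    d0 = abs(coeff - qim_embed(coeff, 0, delta))
    d1 = abs(coeff - qim_embed(coeff, 1, delta))
    return 0 if d0 <= d1 else 1
```

Because the two lattices are interleaved with spacing `delta/2`, any perturbation smaller than `delta/4` leaves the bit decodable, which is why a larger step-size (permitted in highly masked blocks) directly buys robustness at the cost of a larger embedding distortion.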