
Learning Continuous Implicit Representation for Near-Periodic Patterns

1 Introduction

Patterns are all around us and help us understand our visual world. In the 1990s, a human preattentive vision experiment[43] showed that periodicity is a crucial factor in high-level pattern perception. But most real patterns are not composed of perfectly periodic (tiled) motifs. Consider the common real-world building facade scene in Figure 1 (a). While the windows are laid out periodically, they vary in their individual appearances. Design elements (borders, texture), shading variations, and obstructions (tree, car, street lamp) are not periodic. These factors make it challenging to create a good computational representation for such "Near-Periodic Patterns" (NPP).

Figure 1: Inpainting to remove the tree, street lamp, and car from a near-periodic patterned (NPP) scene in (a). The input image (b) visualizes the mask (white unknown region) and the detected (but inaccurate) NPP representation (yellow lattice). Guided by this inaccurate representation, the state-of-the-art method BPI[22] (c) fails to generate the windows occluded by the tree (orange arrow) and the white strip across the bottom (green arrow). Our NPP-Net (d) maintains global consistency and local variations, while preserving the known regions.

A good NPP representation must preserve both global consistency (similar motif layout) and local variations (differing appearances). For global consistency, the distances (periods) and orientations between adjacent motifs should be accurate (e.g., the window layout). At the same time, the local details in the scene should be fully encoded (e.g., appearance variations in windows or the horizontal design strips). In this paper, we present a novel method to learn such a representation that can be used for applications such as image completion (our main focus), segmentation of periodic parts, and resolution-enhanced scene remapping, e.g., transforming to a fronto-parallel view (see supplementary).

Existing image completion works applicable to NPP can be classified into two categories. The first category does not explicitly consider knowledge of periodicity: these methods complete images by training on large datasets[35, 58, 21, 46, 57] or by exploiting single-image statistics[44, 50, 1]. However, they fail to generate good global consistency, especially with a large unknown mask inside (interpolation) or outside (extrapolation) the image border, or under severe perspective effects. The second category[15, 23, 22, 31] models periodicity as a prior for image completion. These works extract explicit NPP representations (e.g., displacement vectors) and use them to guide image completion. Such methods can generate good global periodic structure if the estimated periodicity is accurate. However, this is hard to achieve in the presence of strong local variations.

Our work is inspired by the recent progress on implicit neural representations[34] that map image coordinates to RGB values using coordinate-based multi-layer perceptrons (MLPs). But naively using this method fails on our task due to the lack of a good periodicity prior. Thus, we present a periodicity-aware coordinate-based MLP to learn a continuous implicit neural representation, which we call NPP-Net for short. The key idea is to extract periodicity information from a partially observed NPP scene and inject it into both the MLP input and the loss function to help optimize the NPP representation.

We propose three novel steps to realize this idea: (1) the Periodicity Proposal step extracts periodicity in the form of a set of candidate periods and orientations that are used together to handle inaccurate detections; (2) the Periodicity-Aware Input Warping step injects periodicity into the MLP input by warping input coordinates according to the proposed periodicities, which preserves global consistency and helps the MLP converge easily to a good periodic pattern; (3) finally, the Periodicity-Guided Patch Loss samples observable patches according to periodicity to optimize the representation, which preserves local variations, improves extrapolation ability, and removes high-frequency artifacts.

Our approach only requires a single image for optimization. This is important since there are no large dedicated NPP datasets. We evaluate our approach on a total of 532 NPP subclasses chosen from three datasets[42, 9, 48]. The scenes include building facades, friezes, ground patterns, wallpapers, and Mondrian patterns that are tiled on one or more geometric planes and perspectively warped. Our dataset is larger than those (157 images at most) used in previous works[15, 22, 23, 31] designed for NPP. We mainly apply NPP-Net to the image completion task, but extend it to resolution-enhanced NPP remapping and NPP segmentation in the supplementary. We compare NPP-Net with four traditional methods[10, 1, 15, 22], five deep learning-based methods[50, 44, 58, 57, 46], and eight variants of NPP-Net. Experiments show that NPP-Net can interpolate and extrapolate images, inpaint large and arbitrarily shaped regions, recover blurry regions when images are remapped, and segment periodic and non-periodic regions, in both planar and multi-planar scenes. Figure 1 shows the effectiveness of NPP-Net in inpainting a complex NPP scene, compared to the state-of-the-art BPI[22]. While our method is not designed for general scenes, it is a useful tool to understand a large class of man-made scenes with near-periodic patterns.

2 Related Work

Near-Periodic Pattern Completion: There are two types of image completion methods that can be applied to NPP. The first type does not explicitly consider periodicity as a prior for completion[35, 58, 21, 46, 57, 51, 5, 63]. The second takes advantage of periodicity to guide the completion. We focus on reviewing the second type.

The first stage for these methods is to obtain an NPP representation to guide image completion. Existing methods aim to represent NPP by detecting the global periodicity despite local variations. The types of NPP arrangements vary[40, 41, 24, 15, 54, 37], but commonly, periodic patterns are assumed to form a 2D lattice[26, 25, 13, 20, 38, 36, 23]. The first lattice-based work[13] for periodicity detection without human interaction finds correspondences using visual similarity and geometric consistency. Liu et al.[25] improve this process by incorporating generalized PatchMatch[2] and a Markov Random Field. Furthermore, Lettry et al.[20] detect a repeated pattern model by searching in the feature space of a pre-trained CNN. Recently, Li et al.[23] designed a compact strategy that searches a deep feature space without any implicit models, but it requires hyperparameter tuning to achieve competitive results. All existing methods describe periodicity using an explicit representation such as keypoints[13, 41, 49, 25], feature-based motifs[40], or displacement vectors[20, 23]. But they do not preserve both global consistency and local variations well.

The second stage is to generate or inpaint an NPP image guided by the NPP representation[28, 27, 29, 15, 31, 23, 22, 12]. One common assumption is that the NPP lies on a single plane. Liu et al.[27] synthesize an NPP image through multi-model deformation fields given an input NPP patch and its representation. Mao et al.[31] propose GAN-based NPP generation. Huang et al.[15] and BPI[22] extend image completion to the multi-plane case. They detect periodicities in these planes[14, 30, 23] and use them to guide image completion, based on [53] and [1]. Unlike the earlier work of Huang et al., BPI uses a periodicity detection method based on feature maps extracted from a pre-trained network. Also, the image completion step of the state-of-the-art BPI does not use their prior GAN-based method[31].

In summary, the above methods assume that NPP representation is good enough for guidance, which is not guaranteed. By contrast, we merge the two stages by optimizing the implicit representation using image reconstruction error.

Implicit Neural Representations: Recently, coordinate-based multi-layer perceptrons (MLPs) have been used to obtain implicit neural representations (INRs), mapping coordinates to various signals such as shapes[11, 8, 44], scenes[34, 32], and images[7, 4, 44]. Mildenhall et al.[34] represent a 3D scene from a sparse set of views for novel view synthesis. Siren[44] replaces ReLU with a periodic activation function and designs an initialization scheme for modeling finer details. Chen et al.[7] present a Local Implicit Image Function for the generation of arbitrary resolutions. Skorokhodov et al.[45] design a decoder based on an INR with GAN training for high-quality image generation.

NPP-Net differs from previous methods in two ways: (1) Directly using MLP[34, 44] fails to learn accurate NPP representation without high-level structural understanding. We propose a periodicity-aware MLP. (2) Many works require a large dataset for training, while we optimize on a single image.

3 NPP-Net

Figure 2: The initial pipeline of NPP-Net consists of three modules. (1) Periodicity-Aware Input Warping (pink) warps input coordinates using the detected periodicity. (2) Coordinate-based MLP (blue) maps warped and input coordinate features to an RGB value. (3) Single Image Optimization (yellow) uses pixel loss and periodicity-guided patch loss on a single NPP image. The final pipeline in Section 3.4 shows how multiple periodicities are automatically detected and utilized.

We aim to build an MLP that maps image coordinates to pixel values, given a partial observation of an NPP image. We describe NPP-Net using the image completion task: the unknown (masked) region is completed (or inpainted) by training on the remainder of the NPP image. For clarity, we first describe the method for a single-plane NPP scene, pre-warping the image to be fronto-parallel [60]. We then extend NPP-Net to handle multi-planar scenes.

Our key idea is to extract periodicity information from the known NPP region and inject it into the MLP input and the loss function. The initial pipeline of NPP-Net (Figure 2) consists of three modules: (1) Periodicity-Aware Input Warping transforms image coordinates using the detected periodicity; (2) Coordinate-based MLP maps the transformed coordinates to the corresponding RGB value; (3) Single Image Optimization provides a periodicity-guided loss function for optimizing the MLP on a single image.

3.1 Periodicity-Aware Input Warping

A traditional MLP is not good at capturing global periodic structure without additional priors. In fact, previous works[55, 64] have shown that a traditional MLP is unable to extrapolate a 1D periodic signal even with many training samples. The Periodicity-Aware Input Warping module thus explicitly injects periodicity information into the MLP by warping the image coordinates.

Assuming a 2D lattice arrangement, the periodicity is represented as two displacement vectors $\mathbf{d}_1$ and $\mathbf{d}_2$ (orange arrows in Figure 2). A perfect infinite periodic pattern is invariant if shifted by $n_1\mathbf{d}_1 + n_2\mathbf{d}_2$ for any integers $n_1, n_2$. This representation can be transformed into periods and orientations, visualized as the magnitudes and orientations of the red arrows ($\mathbf{p}_1$ and $\mathbf{p}_2$, called periodicity vectors). Mathematically, the transformation is obtained by solving $\mathbf{p}_i \cdot \mathbf{d}_j = 0$ and $\mathbf{p}_i \times \mathbf{d}_j = \mathbf{d}_i \times \mathbf{d}_j$ for $i \neq j$, where the cross product is defined using the corresponding 3D vectors on the plane $z = 0$. A periodicity is then defined as a vector pair $(\mathbf{p}_1, \mathbf{p}_2)$. An extension to circular patterns is available in the supplementary.
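To make the transformation concrete, below is a minimal NumPy sketch of one consistent way to compute periodicity vectors from the displacement vectors; the function and variable names are ours, not from the paper's code.

```python
import numpy as np

def periodicity_vectors(d1, d2):
    """Turn lattice displacement vectors (d1, d2) into periodicity vectors
    (p1, p2): p1 is perpendicular to d2 (and p2 to d1), with magnitude equal
    to the lattice-line spacing, i.e. the period of a 1D wave that is
    invariant under both lattice shifts."""
    d1, d2 = np.asarray(d1, float), np.asarray(d2, float)
    area = abs(d1[0] * d2[1] - d1[1] * d2[0])   # |d1 x d2|: lattice cell area
    p1 = np.array([-d2[1], d2[0]])              # direction perpendicular to d2
    p1 = p1 / np.linalg.norm(p1) * (area / np.linalg.norm(d2))
    if np.dot(p1, d1) < 0:                      # orient p1 along d1
        p1 = -p1
    p2 = np.array([-d1[1], d1[0]])              # direction perpendicular to d1
    p2 = p2 / np.linalg.norm(p2) * (area / np.linalg.norm(d1))
    if np.dot(p2, d2) < 0:                      # orient p2 along d2
        p2 = -p2
    return p1, p2

p1, p2 = periodicity_vectors([40.0, 0.0], [5.0, 30.0])
print(np.dot(p1, [5.0, 30.0]))  # ~0: p1 is perpendicular to d2
```

For an axis-aligned lattice, the periodicity vectors coincide with the displacement vectors themselves.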

One way to obtain the periodicity for an NPP image is to treat it as a learnable parameter and jointly optimize it with NPP-Net[6, 16]. However, this is hard for two reasons. (1) A good periodicity is not unique (any integer multiple works). (2) Many real-world NPP scenes contain strong local variations, leading to a complicated cost function. Thus we adopt an existing periodicity detection method[23] for input warping, which extracts feature maps from a pretrained CNN and performs a brute-force search to obtain a periodicity. Then, for each periodicity vector $\mathbf{p}_i$, with period $p_i = \|\mathbf{p}_i\|_2$ and orientation $\theta_i$, we define a warp as a bivariate function:

$$w_i(x, y) = \cos\!\left(\frac{2\pi}{p_i}\,\big(x\cos\theta_i + y\sin\theta_i\big)\right).$$

This function generates a warped coordinate value sampled from a periodic pattern with period $p_i$ along direction $\theta_i$, as shown in Figure 2. Through this feature engineering, the warped coordinates explicitly encode the periodicity information. The warped coordinates $w_1$ and $w_2$, together with the original coordinates $x$ and $y$, are further normalized to $[-1, 1]$ and passed through positional encoding[34] to allow the network to model high-frequency signals[47]. The encoded coordinates are then input to the MLP. We keep the notations of coordinates before and after positional encoding the same for simplicity. The feature dimension is determined by the number of frequencies in the positional encoding, with a set of four values encoded at each frequency: the original coordinates $(x, y)$ and the warped coordinates $(w_1, w_2)$.

3.2 Coordinate-Based MLP

We adopt a coordinate-based MLP to represent NPP images. It is more effective and compact than a CNN for modeling periodic signals, since coordinates are naturally suited for encoding positional (periodic) information. Specifically, we input the warped coordinate features to enforce global consistency, and also input the original (unwarped) coordinate features to help preserve local variations. The output of the MLP is the RGB value corresponding to the input image coordinate. Since the ReLU activation function has been proven ineffective for extrapolating periodic signals[55], we use the more suitable SNAKE function[64].
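A minimal sketch of such a coordinate-based MLP with the SNAKE activation of [64] follows; the hidden width, depth, and output squashing are assumptions (the final pipeline additionally uses a second branch, per Section 3.4).

```python
import torch
import torch.nn as nn

class Snake(nn.Module):
    """SNAKE activation [64]: x + sin^2(a*x) / a. It keeps a monotone trend
    plus a periodic component, which ReLU networks cannot extrapolate."""
    def __init__(self, a=1.0):
        super().__init__()
        self.a = a

    def forward(self, x):
        return x + torch.sin(self.a * x) ** 2 / self.a

def make_mlp(in_dim=48, hidden=256, depth=4):
    layers = []
    for i in range(depth):
        layers += [nn.Linear(in_dim if i == 0 else hidden, hidden), Snake()]
    layers += [nn.Linear(hidden, 3), nn.Sigmoid()]  # RGB output in [0, 1]
    return nn.Sequential(*layers)

mlp = make_mlp()                   # in_dim matches the 48-D features above
rgb = mlp(torch.rand(1024, 48))    # (1024, 3)
```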

3.3 Single Image Optimization

3.3.1 Pixel Loss:

Pixel loss is the most intuitive way to optimize a coordinate-based MLP[34]: it compares predicted and ground truth pixel values. For an image coordinate $\mathbf{x}$, we adopt the robust loss function [3]:

$$L_{pix}(\mathbf{x}) = \rho\big(\hat{C}(\mathbf{x}) - C(\mathbf{x})\big),$$

where $\hat{C}(\mathbf{x})$ and $C(\mathbf{x})$ are the output RGB value of the MLP and the ground truth RGB value from the input image at position $\mathbf{x}$, respectively, and $\rho$ is the robust function. This loss is applied only to the known regions.
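For reference, here is a minimal sketch of the general robust loss of [3]; the shape parameters used by NPP-Net are not stated here, so alpha and c are illustrative.

```python
import torch

def general_robust_loss(residual, alpha=1.0, c=0.1):
    """General and adaptive robust loss of Barron [3] (valid for alpha not in
    {0, 2}). alpha=2 approaches L2; alpha=1 is a smooth L1 (Charbonnier-like);
    lower alpha down-weights outliers more aggressively. c scales the residual."""
    sq = (residual / c) ** 2
    b = abs(alpha - 2.0)
    return (b / alpha) * ((sq / b + 1.0) ** (alpha / 2.0) - 1.0)

pred, gt = torch.rand(1024, 3), torch.rand(1024, 3)
pixel_loss = general_robust_loss(pred - gt).mean()   # applied on known pixels only
```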

But merely adopting pixel loss as in NeRF[34] fails to generate a good NPP for two reasons. (1) The high-dimensional input features result in high-frequency artifacts (Figure 3 (b)); see [47, 61] for details on this problem. (2) Pixel loss does not enforce explicit constraints to model the correlation between a coordinate's features and those of its neighbors. This constraint is critical for preserving local variations since it helps capture local patch statistics. Thus, pixel loss fails to preserve local variations; for example, in Figure 3 (b), it generates periodic artifacts in the top non-periodic region. (Figure 3 is generated based on the final pipeline explained in Section 3.4.)

3.3.2 Periodicity-Guided Patch Loss:

To address the limitations of pixel loss, we force the network to learn patch internal statistics by incorporating patch loss, which compares predicted and ground truth patches. The ground truth patches can be sampled at the same position as the predicted patch (for known regions), or sampled according to periodicity (for any region).

GT Patch at the Same Position: For a predicted patch in the known region, the ground truth patch at the same position is available. Specifically, for a square patch of size $s \times s$ centered at position $\mathbf{x}$, we input all the pixel coordinates in the patch into the MLP to obtain a predicted RGB patch $\hat{P}(\mathbf{x})$. Let $P(\mathbf{x})$ be the corresponding ground truth patch at the same position and $M(\mathbf{x})$ be the mask of known pixels. We apply a perceptual loss[59] on masked patches:

$$L_{same}(\mathbf{x}) = \mathcal{L}_{perc}\big(\hat{P}(\mathbf{x}) \odot M(\mathbf{x}),\; P(\mathbf{x}) \odot M(\mathbf{x})\big),$$

where $\odot$ is the element-wise product.
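A simplified sketch of such a masked perceptual loss is given below, using frozen VGG-16 features as a stand-in for the deep-feature metric of [59]; the chosen layers and weighting are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import vgg16, VGG16_Weights

class PerceptualLoss(nn.Module):
    """Compare patches in the feature space of a frozen VGG-16. This is a
    simplified stand-in for the perceptual loss of [59]; the exact layers
    and weighting in NPP-Net may differ."""
    def __init__(self, layers=(3, 8, 15)):      # relu1_2, relu2_2, relu3_3
        super().__init__()
        self.vgg = vgg16(weights=VGG16_Weights.DEFAULT).features.eval()
        for p in self.vgg.parameters():
            p.requires_grad_(False)
        self.layers = set(layers)

    def forward(self, pred, target):            # (N, 3, H, W) patches
        loss, x, y = 0.0, pred, target
        for i, layer in enumerate(self.vgg):
            x, y = layer(x), layer(y)
            if i in self.layers:
                loss = loss + F.l1_loss(x, y)
            if i == max(self.layers):
                break
        return loss

loss_fn = PerceptualLoss()
patch, gt = torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64)
mask = torch.ones(2, 1, 64, 64)
l_same = loss_fn(patch * mask, gt * mask)       # element-wise masking as in the text
```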

GT Patches Sampled Based on Periodicity: To train on unknown regions, we propose to sample ground truth patches based on periodicity. This is an effective way to handle the MLP extrapolation problem, which cannot be solved merely by using input warping and the SNAKE activation function[64]. The input and output images in Figure 2 illustrate this sampling strategy.

Specifically, we sample multiple nearby ground truth patches for supervision by shifting the position $\mathbf{x}$ based on the estimated periodicity. The shifted patch center is defined as $\mathbf{x}' = \mathbf{x} + n_1\mathbf{d}_1 + n_2\mathbf{d}_2$ with integers $n_1, n_2$, where $\mathbf{d}_1$ and $\mathbf{d}_2$ are the displacement vectors. Because the predicted and ground truth patches are not necessarily aligned, we adopt the contextual loss [33]:

$$L_{per}(\mathbf{x}) = \frac{1}{|\mathcal{N}|} \sum_{(n_1, n_2) \in \mathcal{N}} \mathcal{L}_{cx}\big(\hat{P}(\mathbf{x}),\; P(\mathbf{x}')\big),$$

where $\mathcal{N}$ is a set of $(n_1, n_2)$ pairs corresponding to the nearest ground truth patches; local variations are preserved by using nearby patches for supervision.
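Below is a minimal sketch of this periodicity-guided sampling, assuming the shifted center only needs to land on a known pixel; patch-boundary checks and the contextual loss itself are omitted.

```python
import numpy as np

def sample_shifted_centers(x, d1, d2, known, num=4, max_n=3):
    """Periodicity-guided sampling: shift patch center x by integer lattice
    combinations n1*d1 + n2*d2 and keep the nearest shifts whose center lands
    on a known pixel (a full patch-inside check is omitted for brevity)."""
    h, w = known.shape
    candidates = []
    for n1 in range(-max_n, max_n + 1):
        for n2 in range(-max_n, max_n + 1):
            if n1 == 0 and n2 == 0:
                continue
            shift = n1 * d1 + n2 * d2
            cx, cy = x + shift
            if 0 <= cx < w and 0 <= cy < h and known[int(cy), int(cx)]:
                candidates.append((shift @ shift, (cx, cy)))  # squared shift length
    candidates.sort(key=lambda t: t[0])                       # nearest shifts first
    return [c for _, c in candidates[:num]]

known = np.ones((256, 256), dtype=bool)                       # True = observed pixel
centers = sample_shifted_centers(np.array([128.0, 128.0]),
                                 np.array([40.0, 0.0]), np.array([0.0, 32.0]), known)
```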

Patch Loss: Our patch loss combines the two sampling strategies: $L_{patch}(\mathbf{x}) = \lambda_s\, b(\mathbf{x})\, L_{same}(\mathbf{x}) + \lambda_p\, L_{per}(\mathbf{x})$, where $\lambda_s$ and $\lambda_p$ are constant weights, and $b(\mathbf{x})$ is a binary function: 1 for $\mathbf{x}$ in known regions and 0 for $\mathbf{x}$ in unknown regions.

Figure 3: Comparing different losses based on the final pipeline. The red, yellow and cyan dots in (a) visualize the Top-3 periodicities. Zoom-ins of the unknown area (white rectangle) are in (b)-(f). Merely using pixel loss (b) generates high-frequency artifacts across the image and periodic artifacts in the top part. Adopting only patch loss (c) removes the artifacts but has poor global structure. Using pixel loss and patch loss with random sampling (d) cannot preserve global consistency and local variations well since the ground truth patches are not sampled according to periodicity and might be far from the predicted patch. With pixel loss and periodicity-guided patch loss, NPP-Net (e) solves these issues.

Total Loss: Our final loss is the combination of patch loss and pixel loss:

$$L = \lambda_{pix} \sum_{\mathbf{x} \in \mathcal{X}_{pix}} L_{pix}(\mathbf{x}) + \lambda_{patch} \sum_{\mathbf{x} \in \mathcal{X}_{patch}} L_{patch}(\mathbf{x}),$$

where $\lambda_{pix}$ and $\lambda_{patch}$ are constant weights. $\mathcal{X}_{pix}$ contains pixel coordinates that are randomly sampled in known areas, and $\mathcal{X}_{patch}$ contains patch center coordinates sampled in both known and unknown areas in proportion.

Training with this loss preserves both global consistency and local variations, as shown in Figure 3 (e). In fact, using only the patch loss cannot ensure global consistency if the detected periodicity is not accurate enough: in Figure 3 (c), the pattern structure is poorly reconstructed because the loss focuses only on local structure. We also show the result of combining pixel loss with a patch loss whose patches are randomly sampled in the known regions (we call this the random sampling strategy) in Figure 3 (d). This fails to generate correct periodic patterns and good local details because the output and sampled patches are largely misaligned and far away from each other.

3.4 Periodicity Proposal

Although the initial pipeline above shows good performance, it still fails when periodicity detection is very inaccurate. To improve the robustness of NPP-Net, we design a Periodicity Proposal module to provide additional periodicity information. As shown in Figure 4, we first search for multiple candidate periodicities and then augment the input to the MLP to handle inaccurate periodicity detection.

Periodicity Searching: Our searching strategy is based on the same periodicity detection method[23] adopted in the initial pipeline, whose original implementation requires manual hyperparameter tuning. Instead, we design an automatic tuning method, which evaluates each candidate periodicity (obtained from various hyperparameters) in the context of image completion. Specifically, we first generate pseudo masks in the known regions and treat them as unknown masks for image completion. Then we execute the initial pipeline for each candidate periodicity and compute its reconstruction error in the pseudo mask regions for periodicity ranking. Since we focus on reconstructing a coarse global structure, we use a lightweight initial NPP-Net without patch loss for efficiency, which takes around 10 seconds per periodicity on a Titan Xp GPU.
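A minimal sketch of this ranking loop is shown below; `fit_fn` stands in for a short optimization of the lightweight initial NPP-Net, and `make_pseudo_masks` is a hypothetical helper illustrating one way to hide known pixels.

```python
import numpy as np

def make_pseudo_masks(known, num=3, size=32, seed=0):
    """Hypothetical helper: hide random square regions inside the known area."""
    rng = np.random.default_rng(seed)
    h, w = known.shape
    masks = []
    for _ in range(num):
        m = np.zeros_like(known)
        y, x = rng.integers(0, h - size), rng.integers(0, w - size)
        m[y:y + size, x:x + size] = True
        masks.append(m & known)                  # only hide pixels that were known
    return masks

def rank_periodicities(image, known, candidates, fit_fn):
    """Rank candidate periodicities by completion quality: hide pseudo masks,
    fit a lightweight model per candidate via the caller-supplied
    fit_fn(image, valid_mask, periodicity) -> reconstruction, then score each
    candidate by reconstruction error on the hidden pixels."""
    pseudo_masks = make_pseudo_masks(known)
    scores = []
    for periodicity in candidates:
        errs = []
        for pm in pseudo_masks:
            recon = fit_fn(image, known & ~pm, periodicity)
            errs.append(((recon - image) ** 2)[pm].mean())
        scores.append(np.mean(errs))
    order = sorted(range(len(candidates)), key=scores.__getitem__)
    return [candidates[k] for k in order]        # best (Top-1) first
```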

Figure 4: The final pipeline of NPP-Net modifies two modules of the initial pipeline. (1) Periodicity Proposal (green) automatically searches and augments the input periodicity to handle inaccurate periodicity detection and encourage global consistency. (2) Coordinate-based MLP (lavender blue) has two branches: (a) for the Top-1 periodicity and original coordinates, and (b) for the rest.

Periodicity Augmentation: Prior methods[15, 31, 23, 22] also use one periodicity to guide completion, as in our initial pipeline. This cannot guarantee global consistency if the periodicity is inaccurate, especially when the unknown mask is large (see experiments). So, we augment the pattern periodicity at two levels to improve robustness. At the coarse level, instead of searching for the single best periodicity, we keep the Top-K candidates $\{(\mathbf{p}_1^k, \mathbf{p}_2^k)\}_{k=1}^{K}$ to cover multiple possible solutions. This coarse-level augmentation encourages NPP-Net to move towards the most reasonable candidate periodicity. At the fine level, we augment periodicities with small offsets to better handle smaller errors: each periodicity vector $\mathbf{p}$ is augmented to the set $\{\mathbf{p} + \delta\,\hat{\mathbf{p}} \mid \delta \in \Delta\}$, where $\hat{\mathbf{p}}$ is the unit vector along $\mathbf{p}$ and $\Delta$ is an empirically chosen set of offsets (in pixels). Finally, we merge all the augmented periodicity vectors into a set $\mathcal{P}$.
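A minimal sketch of this two-level augmentation follows; the offset values are illustrative, not the paper's empirical choices.

```python
import numpy as np

def augment_periodicities(top_k, offsets=(-2.0, 0.0, 2.0)):
    """Coarse + fine augmentation: keep the Top-K candidate periodicities and
    jitter each periodicity vector along its own direction by small pixel
    offsets. The offsets here are assumptions for illustration."""
    augmented = []
    for p1, p2 in top_k:                         # each candidate is a vector pair
        for p in (p1, p2):
            unit = p / np.linalg.norm(p)
            augmented += [p + delta * unit for delta in offsets]
    return augmented

P = augment_periodicities([(np.array([40.0, 0.0]), np.array([0.0, 32.0]))])
print(len(P))  # 2 vectors * 3 offsets = 6 warped-input channels for this candidate
```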

In our final pipeline, $\mathcal{P}$ contains all the augmented periodicities. We perform input warping (Section 3.1) for each periodicity in $\mathcal{P}$ and input the transformed coordinate features into the MLP. We add an additional branch to the MLP, as shown in Figure 4. Since the Top-1 periodicity is likely the most accurate (see experiments), we input the coordinate features warped using the Top-1 periodicity (including its fine-level augmentation), along with the original coordinate features, to the first branch. The coordinate features warped using the Top-2 to Top-K periodicities are sent to the second branch. For optimization, we sample patches according to the Top-1 periodicity. All other parts remain the same in our final pipeline. We evaluate these changes in our ablation study. See the supplementary for implementation details, including hyperparameters, network architecture, and runtime.

3.5 Extensions

Non-NPP region segmentation: Parts of a scene may not be near-periodic (e.g., trees in front of a building facade). We thus segment the non-periodic regions in an NPP image in an unsupervised manner. We use a traditional segmentation method[17] to provide an initial guess for the non-periodic regions, treated as the unknown mask in image completion. After training NPP-Net, we relabel the initially non-periodic regions with low reconstruction error as periodic regions. A similar strategy serves as a pre-filtering step before applying our method to an arbitrary scene (see supplementary).
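A minimal sketch of this relabeling step is given below; the error metric and threshold are assumptions.

```python
import numpy as np

def refine_nonperiodic_mask(image, recon, init_mask, tau=0.02):
    """Relabel initially non-periodic pixels whose NPP-Net reconstruction
    error is low: they are well explained by the periodic model, so they
    return to the periodic class. tau is an assumed threshold."""
    err = np.abs(recon - image).mean(axis=-1)   # per-pixel L1 error, (H, W)
    return init_mask & (err > tau)              # keep only poorly explained pixels
```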

NPP remapping: NPP scenes captured from a tilted angle can result in blurry motifs after rectification. To enhance resolution, we detect blurry regions and treat them as the unknown mask in image completion. The difference is that we compute the pixel loss in the blurry regions with a smaller weight.

Multi-Plane NPP completion: Given an image with different NPPs on different planes, we first adopt a pre-trained plane segmentation network[56] to obtain a coarse plane segmentation. Then we select a bounding box in each plane as a reference to rectify the plane using TILT[60]. Note that we do not require accurate segmentation, since it is only used for bounding box selection. For each rectified plane, we first use our NPP segmentation method to segment the non-periodic regions (mainly from other planes) and treat them as invalid pixels. Then we perform NPP completion on each plane, transform it back to the original image coordinate system, and recompose the image. Figure 6 shows qualitative results for this extension. Detailed implementation and experiments for these extensions are in the supplementary.

4 Experiments

Dataset: We evaluate NPP-Net on 532 images selected from three relevant datasets for NPP completion: the PSU Near-Regular Texture Database (NRTDB)[42], the Describable Textures Dataset (DTD)[9], and the Facade Dataset[48]. There are 165 NPP images in the NRTDB dataset, including facades, friezes, bricks, fences, grounds, Mondrian images, wallpapers, and carpets. Similarly, there are 258 NPP images in the DTD dataset, including honeycombs, grids, meshes, and dots. The Facade Dataset has 109 rectified images of facades. Strictly speaking, some of these facades are not NPP because the windows are often not arranged periodically, but we nonetheless include them to evaluate our approach when the NPP assumption is not strictly satisfied. Finally, we also collect a small dataset of 11 NPP images for real-world applications (e.g., removing trees in the scene). In general, scenes in NRTDB are more challenging than DTD since they contain more non-periodic regions (boundaries, trees, sky, etc.), complex illumination and backgrounds, and multiple periodicities across an image (Figure 5, row 3). We use TILT[60] to rectify all the images to be fronto-parallel when needed. See the supplementary for more details, including sampled images and mask generation.

Metrics: No single metric can evaluate NPP image completion comprehensively. So we adopt three metrics to cover different scales, including LPIPS (perceptual distance)[59], SSIM[52], and PSNR. Lower LPIPS, higher SSIM, and higher PSNR mean better performance. A known limitation for SSIM and PSNR is that blurry images also tend to receive high scores in these metrics[19], while LPIPS handles this issue better. See supplementary for FID[39] and RMSE metrics.

Table 1: Comparison with baselines and NPP-Net variants for NPP completion; the metrics are evaluated only in unknown regions. The best and second-best results (excluding variants) are highlighted in bold and underlined, respectively. NPP-Net outperforms all the baselines on NRTDB and DTD. While Facade contains some non-NPP images, NPP-Net still outperforms all baselines except Lama. See the supplementary for results on the full images.

Ablation Study: We perform three studies. First, we compare to a "No Periodicity" variant, which uses a standard coordinate-based MLP without a periodicity prior. Results in Table 1 show that it fails to capture the arrangement of tiled motifs. Note that the Facade dataset may behave differently because it contains some non-NPP images. Second, to study the loss functions, we design three variants: (1) only pixel loss, (2) only patch loss, (3) pixel loss plus patch loss with random patch sampling. As discussed in Section 3.3, the results in Table 1 and Figure 3 show that NPP-Net outperforms these variants.

Third, we study the effect of the periodicity augmentation (coarse-level Top-K candidate periodicities and fine-level offsets) by testing four variants: (1) initial pipeline (no augmentation), (2) Top-1 with offsets, (3) Top-5 with offsets, (4) Top-3 without offsets. Table 1 shows that the initial pipeline performs the worst. A larger K (Top-5) hurts performance, as more inaccurate periodicities may be included; a smaller K (Top-1) also performs badly because the correct periodicity may not be included. Even a suitable K (Top-3) performs worse without offsets, since offsets help handle smaller errors. With the appropriate K and offsets, NPP-Net generates the best results. See more studies in the supplementary.

Baselines: We compare against non-periodicity-guided and periodicity-guided baselines. For the former, we select two traditional methods, Image Quilting[10] and PatchMatch[1], which can handle pattern structure locally for some scenes with a properly selected patch size. We then consider two learning-based methods, DIP[50] (CNN-based) and Siren[44] (MLP-based), trained on a single image for inpainting. We also choose several learning-based methods, PEN-Net[57], ProFill[58], and Lama[46], that are trained on large real-world datasets[62, 9, 17, 18], since they show competitive NPP completion examples in their papers.

Figure 5: Qualitative results for NPP completion. We show four baselines that operate on a single image. The red, yellow, and cyan dots in the input images show the first, second, and third periodicity from the periodicity searching module, respectively. For visualization, all periods are scaled by 2. Our NPP-Net outperforms all baselines in global consistency (rows 1-6) and local variations (rows 6-9).

For periodicity-guided methods, we choose two baselines: Huang et al.[15] and BPI[22]. Both were designed for multi-planar scenes but can be used for single-plane completion as well. BPI first segments and rectifies planes, then performs periodicity detection[23] on each plane, and inpaints each plane independently. For a fair comparison on single-plane NPP images, we only compare with BPI's completion step, to remove potential inconsistency introduced by other steps (e.g., plane rectification). For Huang et al., we use their pipeline without modification, as their method works directly on a single plane and its completion step cannot be easily separated out. We also compare with these two methods on multi-plane NPP images.

Figure 6: Qualitative comparison for multi-plane NPP completion. We show three baselines, which are either designed for multi-plane NPP scenes (Huang et al. and BPI) or trained on large datasets (Lama). Some zoom-in boxes are resized for visualization. Full results are in the supplementary.

Comparison with Baselines: Table 1 shows the quantitative results for all methods. On the NRTDB and DTD datasets, among the single-image baselines, BPI obtains nearly the best LPIPS because it generates a more reasonable global structure guided by periodicity. PatchMatch obtains better SSIM and PSNR even though it produces blurred results for large masks. Lama achieves the best results among the baselines since it adopts fast Fourier convolutions for a larger receptive field, which allows it to implicitly learn the underlying periodicity from large datasets. Our NPP-Net outperforms all baselines on these two datasets while optimizing only on a single image. On the Facade dataset, even though some non-NPP images are included, NPP-Net performs better than all baselines except Lama; Lama effectively learns a scene prior from large datasets and thus works well for non-NPP images, leading to the best performance on this dataset.

Qualitative results are shown in Figure 5. Large rectangular masks are challenging since there is less information from which to estimate the representation. Perceptually, PatchMatch works well when the motifs are small (row 3) but produces blur with large masks (rows 2 and 6). Although BPI and Huang et al. perform better than the non-periodicity-guided baselines, they generate artifacts when the NPP representation (periodicity) has poor global consistency (rows 1-6) or lacks local variations (rows 6-9). Note that the Top-1 periodicity in row 1 (red dots) is inaccurate: the actual periods are half of those shown. NPP-Net can extrapolate NPP images well (row 5), generalize to irregular masks (row 6), and work for scenes that contain non-periodic regions (row 4). We show that NPP-Net can be extended to handle multi-planar scenes in Figure 6. Among the baselines, Lama (trained on large datasets) better handles local variations (row 2, purple box). Although it captures some global structure when the mask is not large, Lama performs worse than BPI when the mask extends outside the image border for extrapolation (all rows) and when the perspective effect is severe (row 2, cyan box). Learning from the Top-K periodicities, NPP-Net produces the best images, maintaining global consistency and local variations. Finally, the output of NPP-Net can in turn improve periodicity detection, leading to better image completion for both BPI and NPP-Net (see supplementary).

Influence of Mask Size: We conduct two experiments to study the influence of different mask sizes for image completion.

First, for each image in the NRTDB dataset, if the K-th periodicity has the smallest error among the Top-3 periodicities, we assign the image to the K-th periodicity. We show the number of images assigned to each periodicity for different mask sizes in Figure 7 (left). While the Top-1 periodicity is the best one for most images with small mask sizes, this fraction decreases in the large-mask case (64% of the image). This demonstrates that the lower-ranked candidates sometimes contain the better periodicity, and leveraging them via our periodicity augmentation strategy helps in learning the NPP representation, especially when the mask is large.

To compute the periodicity error, we manually annotate the periodicity with the smallest period as the ground truth periodicity for the dataset. For each periodicity, we generate a 2D point cloud, defined as the lattice points $\{n_1\mathbf{d}_1 + n_2\mathbf{d}_2 \mid n_1, n_2 \in \mathbb{Z}\}$, and filter out points that fall outside the image range. The periodicity error is calculated as the average L2 distance between every point in the proposed point cloud and its nearest neighbor in the ground truth point cloud (a one-directional chamfer distance).
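A minimal sketch of this metric is given below; the lattice origin and search range are assumptions.

```python
import numpy as np

def chamfer_one_directional(proposed, gt):
    """Average L2 distance from each proposed lattice point to its nearest
    ground truth lattice point (one-directional chamfer distance)."""
    diffs = proposed[:, None, :] - gt[None, :, :]        # (N, M, 2)
    dists = np.linalg.norm(diffs, axis=-1)               # (N, M)
    return dists.min(axis=1).mean()

def lattice_points(d1, d2, h, w, n=20):
    """All integer combinations n1*d1 + n2*d2 that fall inside the image,
    with the lattice anchored at the origin for illustration."""
    pts = np.array([i * d1 + j * d2
                    for i in range(-n, n + 1) for j in range(-n, n + 1)])
    keep = (pts[:, 0] >= 0) & (pts[:, 0] < w) & (pts[:, 1] >= 0) & (pts[:, 1] < h)
    return pts[keep]

err = chamfer_one_directional(
    lattice_points(np.array([41.0, 0.0]), np.array([0.0, 33.0]), 256, 256),
    lattice_points(np.array([40.0, 0.0]), np.array([0.0, 32.0]), 256, 256))
```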

Second, we show the LPIPS performance for different mask sizes in Figure 7 (right). In particular, we filter out images that contain large non-periodic regions. When the mask area is small (4% of the image), PatchMatch slightly outperforms NPP-Net because the unknown regions may not contain pattern structure, and simply sampling nearby patches is sufficient to produce good results. Among single-image methods, Huang et al., BPI, and NPP-Net perform better as the mask size increases, since they are guided by periodicity. Taking better advantage of the periodicity information, NPP-Net is more robust to various mask sizes, especially larger masks.

Figure 7: Left: the number of NPP images for which each periodicity has the smallest periodicity error (among the Top-3), for different mask sizes in NRTDB. As the mask size grows, the best periodicity more often appears among the Top-2 and Top-3 candidates, so utilizing them in NPP-Net is useful. Right: LPIPS results (lower is better) for different mask sizes in NRTDB. PatchMatch performs best when masks are very small, but NPP-Net outperforms all baselines for large masks.

Limitations: (1) The periodicity proposal cannot be too erroneous; NPP-Net tolerates errors of roughly 10%. (2) NPP-Net assumes a multi-planar scene with translated, circular, and potentially other types of symmetric NPP that can be modeled.

5 Conclusion

In conclusion, we show how to learn an effective implicit neural representation for Near-Periodic Patterns. We design the periodicity proposal, periodicity-aware input warping, and periodicity-guided patch loss to maintain global consistency and local variations. We compare NPP-Net with nine baselines and eight variants on three datasets to demonstrate its effectiveness. We believe that NPP-Net is a strong tool for understanding a large class of man-made scenes.

Acknowledgement: This work was supported by a gift from Zillow Group, USA, and NSF Grants #CNS-2038612, #IIS-1900821.

References

  • [1] C. Barnes, E. Shechtman, A. Finkelstein, and D. B. Goldman (2009) PatchMatch: a randomized correspondence algorithm for structural image editing. ACM Trans. Graph., pp. 24. Cited by: §1, §1, §2, Table 1, §4.
  • [2] C. Barnes, E. Shechtman, D. B. Goldman, and A. Finkelstein (2010) The generalized patchmatch correspondence algorithm. In European Conference on Computer Vision (ECCV), pp. 29–43. Cited by: §2.
  • [3] J. T. Barron (2019) A general and adaptive robust loss function. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4331–4339. Cited by: §3.3.1.
  • [4] M. Bemana, K. Myszkowski, H. Seidel, and T. Ritschel (2020) X-fields: implicit neural view-, light-and time-image interpolation. ACM Transactions on Graphics (TOG), pp. 1–15. Cited by: §2.
  • [5] C. Cao and Y. Fu (2021) Learning a sketch tensor space for image inpainting of man-made scenes. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 14509–14518. Cited by: §2.
  • [6] H. Chen, J. Liu, W. Chen, S. Liu, and Y. Zhao (2022) Exemplar-based pattern synthesis with implicit periodic field network. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 3708–3717. Cited by: §3.1.
  • [7] Y. Chen, S. Liu, and X. Wang (2021) Learning continuous image representation with local implicit image function. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8628–8638. Cited by: §2.
  • [8] Z. Chen and H. Zhang (2019) Learning implicit fields for generative shape modeling. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5939–5948. Cited by: §2.
  • [9] M. Cimpoi, S. Maji, I. Kokkinos, S. Mohamed, and A. Vedaldi (2014) Describing textures in the wild. In Proceedings of the IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), Cited by: §1, Table 1, §4, §4.
  • [10] A. A. Efros and W. T. Freeman (2001) Image quilting for texture synthesis and transfer. In International Conference on Computer Graphics and Interactive Techniques (SIGGRAPH), pp. 341–346. Cited by: §1, Table 1, §4.
  • [11] K. Genova, F. Cole, D. Vlasic, A. Sarna, W. T. Freeman, and T. Funkhouser (2019) Learning shape templates with structured implicit functions. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 7154–7164. Cited by: §2.
  • [12] T. Halperin, H. Hakim, O. Vantzos, G. Hochman, N. Benaim, L. Sassy, M. Kupchik, O. Bibi, and O. Fried (2021) Endless loops: detecting and animating periodic patterns in still images. ACM Transactions on graphics (TOG). Cited by: §2.
  • [13] J. Hays, M. Leordeanu, A. A. Efros, and Y. Liu (2006) Discovering texture regularity as a higher-order correspondence problem. In European Conference on Computer Vision (ECCV), pp. 522–535. Cited by: §2.
  • [14] K. He and J. Sun (2012) Statistics of patch offsets for image completion. In European conference on computer vision, pp. 16–29. Cited by: §2.
  • [15] J. Huang, S. B. Kang, N. Ahuja, and J. Kopf (2014) Image completion using planar structure guidance. ACM Transactions on graphics (TOG), pp. 1–10. Cited by: §1, §1, §2, §2, §3.4, Table 1, §4.
  • [16] N. Jetchev, U. Bergmann, and R. Vollgraf (2016) Texture synthesis with spatial generative adversarial networks. arXiv preprint arXiv:1611.08207. Cited by: §3.1.
  • [17] B. Jiri, S. Jan, K. Jan, and D. Habart (2017) Supervised and unsupervised segmentation using superpixels, model estimation, and graph cut. Journal of Electronic Imaging. Cited by: §3.5, §4.
  • [18] T. Karras, T. Aila, S. Laine, and J. Lehtinen (2017) Progressive growing of gans for improved quality, stability, and variation. arXiv preprint arXiv:1710.10196. Cited by: §4.
  • [19] C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang, et al. (2017) Photo-realistic single image super-resolution using a generative adversarial network. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4681–4690. Cited by: §4.
  • [20] L. Lettry, M. Perdoch, K. Vanhoey, and L. Van Gool (2017) Repeated pattern detection using cnn activations. In IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 47–55. Cited by: §2.
  • [21] J. Li, N. Wang, L. Zhang, B. Du, and D. Tao (2020) Recurrent feature reasoning for image inpainting. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §1, §2.
  • [22] Y. Li, J. Mao, X. Zhang, W. T. Freeman, J. B. Tenenbaum, N. Snavely, and J. Wu (2020) Multi-plane program induction with 3d box priors. In Neural Information Processing Systems (NeurIPS), Cited by: Figure 1(c), Figure 1, §1, §1, §2, §3.4, Table 1, §4.
  • [23] Y. Li, J. Mao, X. Zhang, W. T. Freeman, J. B. Tenenbaum, and J. Wu (2020) Perspective plane program induction from a single image. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4434–4443. Cited by: §1, §1, §2, §2, §3.1, §3.4, §3.4, §4.
  • [24] J. Liu and Y. Liu (2013) Grasp recurring patterns from a single view. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2003–2010. Cited by: §2.
  • [25] S. Liu, T. Ng, K. Sunkavalli, M. N. Do, E. Shechtman, and N. Carr (2015) PatchMatch-based automatic lattice detection for near-regular textures. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 181–189. Cited by: §2.
  • [26] Y. Liu, R. T. Collins, and Y. Tsin (2004) A computational model for periodic pattern perception based on frieze and wallpaper groups. IEEE transactions on pattern analysis and machine intelligence (TPAMI), pp. 354–371. Cited by: §2.
  • [27] Y. Liu, W. Lin, and J. Hays (2004) Near-regular texture analysis and manipulation. ACM Transactions on Graphics (TOG), pp. 368–376. Cited by: §2.
  • [28] Y. Liu and W. Lin (2003) Deformable texture: the irregular-regular-irregular cycle. Carnegie Mellon University, the Robotics Institute. Cited by: §2.
  • [29] Y. Liu, Y. Tsin, and W. Lin (2005) The promise and perils of near-regular texture. International Journal of Computer Vision (IJCV), pp. 145–159. Cited by: §2.
  • [30] D. G. Lowe (1999) Object recognition from local scale-invariant features. In Proceedings of the IEEE international conference on computer vision (ICCV), pp. 1150–1157. Cited by: §2.
  • [31] J. Mao, X. Zhang, Y. Li, W. T. Freeman, J. B. Tenenbaum, and J. Wu (2019) Program-guided image manipulators. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4030–4039. Cited by: §1, §1, §2, §3.4.
  • [32] R. Martin-Brualla, N. Radwan, M. S. Sajjadi, J. T. Barron, A. Dosovitskiy, and D. Duckworth (2021) Nerf in the wild: neural radiance fields for unconstrained photo collections. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 7210–7219. Cited by: §2.
  • [33] R. Mechrez, I. Talmi, and L. Zelnik-Manor (2018) The contextual loss for image transformation with non-aligned data. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 768–783. Cited by: §3.3.2.
  • [34] B. Mildenhall, P. P. Srinivasan, M. Tancik, J. T. Barron, R. Ramamoorthi, and R. Ng (2020) Nerf: representing scenes as neural radiance fields for view synthesis. In European conference on computer vision (ECCV), pp. 405–421. Cited by: §1, §2, §2, §3.1, §3.3.1, §3.3.1.
  • [35] K. Nazeri, E. Ng, T. Joseph, F. Qureshi, and M. Ebrahimi (2019) EdgeConnect: structure guided image inpainting using edge prediction. In The IEEE International Conference on Computer Vision (ICCV) Workshops, Cited by: §1, §2.
  • [36] M. Park, K. Brocklehurst, R. T. Collins, and Y. Liu (2009) Deformed lattice detection in real-world images using mean-shift belief propagation. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), pp. 1804–1816. Cited by: §2.
  • [37] M. Park, K. Brocklehurst, R. T. Collins, and Y. Liu (2010) Translation-symmetry-based perceptual grouping with applications to urban scenes. In Asian conference on computer vision (ACCV), pp. 329–342. Cited by: §2.
  • [38] M. Park, R. T. Collins, and Y. Liu (2008) Deformed lattice discovery via efficient mean-shift belief propagation. In European Conference on Computer Vision (ECCV), pp. 474–485. Cited by: §2.
  • [39] G. Parmar, R. Zhang, and J. Zhu (2022) On aliased resizing and surprising subtleties in gan evaluation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §4.
  • [40] J. Pritts, O. Chum, and J. Matas (2014) Detection, rectification and segmentation of coplanar repeated patterns. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2973–2980. Cited by: §2.
  • [41] J. Pritts, D. Rozumnyi, M. P. Kumar, and O. Chum (2016) Coplanar repeats by energy minimization. In Proceedings of the British Machine Vision Conference (BMVC), pp. 107.1–107.12. Cited by: §2.
  • [42] PSU Near-Regular Texture Database. Note: http://vivid.cse.psu.edu/ Cited by: §1, Table 1, §4.
  • [43] A.R. Rao and G.L. Lohse (1993) Identifying high level features of texture perception. CVGIP: Graphical Models and Image Processing, pp. 218–233. Cited by: §1.
  • [44] V. Sitzmann, J. Martel, A. Bergman, D. Lindell, and G. Wetzstein (2020) Implicit neural representations with periodic activation functions. Advances in Neural Information Processing Systems. Cited by: §1, §1, §2, §2, Table 1, §4.
  • [45] I. Skorokhodov, S. Ignatyev, and M. Elhoseiny (2021) Adversarial generation of continuous images. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10753–10764. Cited by: §2.
  • [46] R. Suvorov, E. Logacheva, A. Mashikhin, A. Remizova, A. Ashukha, A. Silvestrov, N. Kong, H. Goka, K. Park, and V. Lempitsky (2022) Resolution-robust large mask inpainting with fourier convolutions. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 2149–2159. Cited by: §1, §1, §2, Table 1, §4.
  • [47] M. Tancik, P. P. Srinivasan, B. Mildenhall, S. Fridovich-Keil, N. Raghavan, U. Singhal, R. Ramamoorthi, J. T. Barron, and R. Ng (2020) Fourier features let networks learn high frequency functions in low dimensional domains. In Neural Information Processing Systems (NeurIPS), Cited by: §3.1, §3.3.1.
  • [48] O. Teboul, L. Simon, P. Koutsourakis, and N. Paragios (2010) Segmentation of building facades using procedural shape priors. In 2010 IEEE computer society conference on computer vision and pattern recognition, pp. 3105–3112. Cited by: §1, Table 1, §4.
  • [49] A. Torii, J. Sivic, T. Pajdla, and M. Okutomi (2013) Visual place recognition with repetitive structures. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 883–890. Cited by: §2.
  • [50] D. Ulyanov, A. Vedaldi, and V. Lempitsky (2018) Deep image prior. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 9446–9454. Cited by: §1, §1, Table 1, §4.
  • [51] T. Wang, H. Ouyang, and Q. Chen (2021) Image inpainting with external-internal learning and monochromic bottleneck. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5120–5129. Cited by: §2.
  • [52] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli (2004) Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing, pp. 600–612. Cited by: §4.
  • [53] Y. Wexler, E. Shechtman, and M. Irani (2007) Space-time completion of video. IEEE Transactions on pattern analysis and machine intelligence, pp. 463–476. Cited by: §2.
  • [54] C. Wu, J. Frahm, and M. Pollefeys (2010) Detecting large repetitive structures with salient boundaries. In European conference on computer vision (ECCV), pp. 142–155. Cited by: §2.
  • [55] K. Xu, M. Zhang, J. Li, S. S. Du, K. Kawarabayashi, and S. Jegelka (2021) How neural networks extrapolate: from feedforward to graph neural networks. In International Conference on Learning Representations (ICLR). Cited by: §3.1, §3.2.
  • [56] Z. Yu, J. Zheng, D. Lian, Z. Zhou, and S. Gao (2019) Single-image piece-wise planar 3d reconstruction via associative embedding. In CVPR, pp. 1029–1037. Cited by: §3.5.
  • [57] Y. Zeng, J. Fu, H. Chao, and B. Guo (2019) Learning pyramid-context encoder network for high-quality image inpainting. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1486–1494. Cited by: §1, §1, §2, Table 1, §4.
  • [58] Y. Zeng, Z. Lin, J. Yang, J. Zhang, E. Shechtman, and H. Lu (2020) High-resolution image inpainting with iterative confidence feedback and guided upsampling. In European Conference on Computer Vision (ECCV), pp. 1–17. Cited by: §1, §1, §2, Table 1, §4.
  • [59] R. Zhang, P. Isola, A. A. Efros, E. Shechtman, and O. Wang (2018) The unreasonable effectiveness of deep features as a perceptual metric. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §3.3.2, §4.
  • [60] Z. Zhang, A. Ganesh, X. Liang, and Y. Ma (2012) TILT: transform invariant low-rank textures. International journal of computer vision (IJCV), pp. 1–24. Cited by: §3.5, §3, §4.
  • [61] J. Zheng, S. Ramasinghe, and S. Lucey (2021) Rethinking positional encoding. arXiv preprint arXiv:2107.02561. Cited by: §3.3.1.
  • [62] B. Zhou, A. Lapedriza, A. Khosla, A. Oliva, and A. Torralba (2017) Places: a 10 million image database for scene recognition. IEEE transactions on pattern analysis and machine intelligence, pp. 1452–1464. Cited by: §4.
  • [63] Y. Zhou, C. Barnes, E. Shechtman, and S. Amirghodsi (2021) TransFill: reference-guided image inpainting by merging multiple color and spatial transformations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2266–2276. Cited by: §2.
  • [64] L. Ziyin, T. Hartwig, and M. Ueda (2020) Neural networks fail to learn periodic functions and how to fix it. In Neural Information Processing Systems (NeurIPS), pp. 1583–1594. Cited by: §3.1, §3.2, §3.3.2.
